Test Report: KVM_Linux_crio 21643

                    
                      cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725
                    
                

Failed tests (14/330)

TestAddons/parallel/Registry (75.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.055714ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003902872s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004794557s
addons_test.go:392: (dbg) Run:  kubectl --context addons-535714 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-535714 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Non-zero exit: kubectl --context addons-535714 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.089840345s)

-- stdout --
	pod "registry-test" deleted from default namespace

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:399: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-535714 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:403: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted from default namespace
*
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 ip
2025/10/02 07:02:23 [DEBUG] GET http://192.168.39.164:5000
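The failing check above can be re-run by hand outside the test harness. The profile name `addons-535714` and the registry Service DNS name are taken from the log; this is a diagnostic sketch against a live cluster, not part of the test suite (it requires the minikube profile from this run to still exist):

```shell
# Re-run the probe the test performs: a one-off busybox pod doing an
# HTTP request (headers only) against the in-cluster registry Service.
kubectl --context addons-535714 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# If that times out, check whether the Service has ready endpoints and
# whether in-cluster DNS resolves the name at all.
kubectl --context addons-535714 -n kube-system get svc,endpoints registry
kubectl --context addons-535714 run --rm dns-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  nslookup registry.kube-system.svc.cluster.local
```

A timeout on the first command with healthy endpoints on the second usually points at in-cluster DNS or the CNI bridge rather than the registry pod itself, which matches the "timed out waiting for the condition" stderr above.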
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535714 -n addons-535714
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 logs -n 25: (2.053688083s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ -p binary-mirror-257523                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ addons  │ enable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-535714 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ip      │ addons-535714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:12.613104  566681 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:12.613401  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613412  566681 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:12.613416  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613691  566681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:12.614327  566681 out.go:368] Setting JSON to false
	I1002 06:57:12.615226  566681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49183,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:12.615318  566681 start.go:140] virtualization: kvm guest
	I1002 06:57:12.616912  566681 out.go:179] * [addons-535714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:12.618030  566681 notify.go:220] Checking for updates...
	I1002 06:57:12.618070  566681 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:57:12.619267  566681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:12.620404  566681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:12.621815  566681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:12.622922  566681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:57:12.623998  566681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:57:12.625286  566681 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:12.655279  566681 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 06:57:12.656497  566681 start.go:304] selected driver: kvm2
	I1002 06:57:12.656511  566681 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:12.656523  566681 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:57:12.657469  566681 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.657563  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.671466  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.671499  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.684735  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.684785  566681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:12.685037  566681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:57:12.685069  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:12.685110  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:12.685121  566681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:12.685226  566681 start.go:348] cluster config:
	{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:12.685336  566681 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.687549  566681 out.go:179] * Starting "addons-535714" primary control-plane node in "addons-535714" cluster
	I1002 06:57:12.688758  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:12.688809  566681 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:12.688824  566681 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:12.688927  566681 preload.go:233] Found /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:57:12.688941  566681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:57:12.689355  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:12.689385  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json: {Name:mkd226c1b0f282f7928061e8123511cda66ecb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:12.689560  566681 start.go:360] acquireMachinesLock for addons-535714: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 06:57:12.689631  566681 start.go:364] duration metric: took 53.377µs to acquireMachinesLock for "addons-535714"
	I1002 06:57:12.689654  566681 start.go:93] Provisioning new machine with config: &{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:12.689738  566681 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 06:57:12.691999  566681 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 06:57:12.692183  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:12.692244  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:12.705101  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I1002 06:57:12.705724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:12.706300  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:12.706320  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:12.706770  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:12.707010  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:12.707209  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:12.707401  566681 start.go:159] libmachine.API.Create for "addons-535714" (driver="kvm2")
	I1002 06:57:12.707450  566681 client.go:168] LocalClient.Create starting
	I1002 06:57:12.707494  566681 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem
	I1002 06:57:12.888250  566681 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem
	I1002 06:57:13.081005  566681 main.go:141] libmachine: Running pre-create checks...
	I1002 06:57:13.081030  566681 main.go:141] libmachine: (addons-535714) Calling .PreCreateCheck
	I1002 06:57:13.081598  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:13.082053  566681 main.go:141] libmachine: Creating machine...
	I1002 06:57:13.082069  566681 main.go:141] libmachine: (addons-535714) Calling .Create
	I1002 06:57:13.082276  566681 main.go:141] libmachine: (addons-535714) creating domain...
	I1002 06:57:13.082300  566681 main.go:141] libmachine: (addons-535714) creating network...
	I1002 06:57:13.083762  566681 main.go:141] libmachine: (addons-535714) DBG | found existing default network
	I1002 06:57:13.084004  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084021  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>default</name>
	I1002 06:57:13.084029  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 06:57:13.084036  566681 main.go:141] libmachine: (addons-535714) DBG |   <forward mode='nat'>
	I1002 06:57:13.084041  566681 main.go:141] libmachine: (addons-535714) DBG |     <nat>
	I1002 06:57:13.084047  566681 main.go:141] libmachine: (addons-535714) DBG |       <port start='1024' end='65535'/>
	I1002 06:57:13.084051  566681 main.go:141] libmachine: (addons-535714) DBG |     </nat>
	I1002 06:57:13.084055  566681 main.go:141] libmachine: (addons-535714) DBG |   </forward>
	I1002 06:57:13.084061  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 06:57:13.084068  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 06:57:13.084084  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 06:57:13.084098  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084111  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 06:57:13.084123  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084131  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084152  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084191  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084749  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.084601  566709 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1002 06:57:13.084771  566681 main.go:141] libmachine: (addons-535714) DBG | defining private network:
	I1002 06:57:13.084780  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084785  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084801  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.084820  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.084831  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.084840  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084851  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.084861  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084868  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084878  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084888  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.090767  566681 main.go:141] libmachine: (addons-535714) DBG | creating private network mk-addons-535714 192.168.39.0/24...
	I1002 06:57:13.158975  566681 main.go:141] libmachine: (addons-535714) DBG | private network mk-addons-535714 192.168.39.0/24 created
	I1002 06:57:13.159275  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.159307  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.159343  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>30f68bcb-0ec3-45ac-9012-251c5feb215b</uuid>
	I1002 06:57:13.159350  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 06:57:13.159356  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:03:a3:ce'/>
	I1002 06:57:13.159360  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.159383  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.159402  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.159413  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.159428  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.159477  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.159489  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.159500  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.159316  566681 main.go:141] libmachine: (addons-535714) setting up store path in /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.159335  566681 main.go:141] libmachine: (addons-535714) building disk image from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:13.159461  566681 main.go:141] libmachine: (addons-535714) Downloading /home/jenkins/minikube-integration/21643-562157/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 06:57:13.159522  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.159293  566709 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.427161  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.426986  566709 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa...
	I1002 06:57:13.691596  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691434  566709 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk...
	I1002 06:57:13.691620  566681 main.go:141] libmachine: (addons-535714) DBG | Writing magic tar header
	I1002 06:57:13.691651  566681 main.go:141] libmachine: (addons-535714) DBG | Writing SSH key tar header
	I1002 06:57:13.691660  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691559  566709 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.691671  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714
	I1002 06:57:13.691678  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines
	I1002 06:57:13.691687  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.691694  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157
	I1002 06:57:13.691702  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 06:57:13.691710  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins
	I1002 06:57:13.691724  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 (perms=drwx------)
	I1002 06:57:13.691738  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines (perms=drwxr-xr-x)
	I1002 06:57:13.691747  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home
	I1002 06:57:13.691758  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube (perms=drwxr-xr-x)
	I1002 06:57:13.691769  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157 (perms=drwxrwxr-x)
	I1002 06:57:13.691781  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 06:57:13.691789  566681 main.go:141] libmachine: (addons-535714) DBG | skipping /home - not owner
	I1002 06:57:13.691803  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 06:57:13.691811  566681 main.go:141] libmachine: (addons-535714) defining domain...
	I1002 06:57:13.693046  566681 main.go:141] libmachine: (addons-535714) defining domain using XML: 
	I1002 06:57:13.693074  566681 main.go:141] libmachine: (addons-535714) <domain type='kvm'>
	I1002 06:57:13.693080  566681 main.go:141] libmachine: (addons-535714)   <name>addons-535714</name>
	I1002 06:57:13.693085  566681 main.go:141] libmachine: (addons-535714)   <memory unit='MiB'>4096</memory>
	I1002 06:57:13.693090  566681 main.go:141] libmachine: (addons-535714)   <vcpu>2</vcpu>
	I1002 06:57:13.693093  566681 main.go:141] libmachine: (addons-535714)   <features>
	I1002 06:57:13.693098  566681 main.go:141] libmachine: (addons-535714)     <acpi/>
	I1002 06:57:13.693102  566681 main.go:141] libmachine: (addons-535714)     <apic/>
	I1002 06:57:13.693109  566681 main.go:141] libmachine: (addons-535714)     <pae/>
	I1002 06:57:13.693115  566681 main.go:141] libmachine: (addons-535714)   </features>
	I1002 06:57:13.693124  566681 main.go:141] libmachine: (addons-535714)   <cpu mode='host-passthrough'>
	I1002 06:57:13.693132  566681 main.go:141] libmachine: (addons-535714)   </cpu>
	I1002 06:57:13.693155  566681 main.go:141] libmachine: (addons-535714)   <os>
	I1002 06:57:13.693163  566681 main.go:141] libmachine: (addons-535714)     <type>hvm</type>
	I1002 06:57:13.693172  566681 main.go:141] libmachine: (addons-535714)     <boot dev='cdrom'/>
	I1002 06:57:13.693186  566681 main.go:141] libmachine: (addons-535714)     <boot dev='hd'/>
	I1002 06:57:13.693192  566681 main.go:141] libmachine: (addons-535714)     <bootmenu enable='no'/>
	I1002 06:57:13.693197  566681 main.go:141] libmachine: (addons-535714)   </os>
	I1002 06:57:13.693202  566681 main.go:141] libmachine: (addons-535714)   <devices>
	I1002 06:57:13.693207  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='cdrom'>
	I1002 06:57:13.693215  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.693220  566681 main.go:141] libmachine: (addons-535714)       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.693225  566681 main.go:141] libmachine: (addons-535714)       <readonly/>
	I1002 06:57:13.693231  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693240  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='disk'>
	I1002 06:57:13.693255  566681 main.go:141] libmachine: (addons-535714)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 06:57:13.693309  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.693334  566681 main.go:141] libmachine: (addons-535714)       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.693341  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693357  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693371  566681 main.go:141] libmachine: (addons-535714)       <source network='mk-addons-535714'/>
	I1002 06:57:13.693378  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693391  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693399  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693411  566681 main.go:141] libmachine: (addons-535714)       <source network='default'/>
	I1002 06:57:13.693416  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693435  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693445  566681 main.go:141] libmachine: (addons-535714)     <serial type='pty'>
	I1002 06:57:13.693480  566681 main.go:141] libmachine: (addons-535714)       <target port='0'/>
	I1002 06:57:13.693520  566681 main.go:141] libmachine: (addons-535714)     </serial>
	I1002 06:57:13.693540  566681 main.go:141] libmachine: (addons-535714)     <console type='pty'>
	I1002 06:57:13.693552  566681 main.go:141] libmachine: (addons-535714)       <target type='serial' port='0'/>
	I1002 06:57:13.693564  566681 main.go:141] libmachine: (addons-535714)     </console>
	I1002 06:57:13.693575  566681 main.go:141] libmachine: (addons-535714)     <rng model='virtio'>
	I1002 06:57:13.693588  566681 main.go:141] libmachine: (addons-535714)       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.693598  566681 main.go:141] libmachine: (addons-535714)     </rng>
	I1002 06:57:13.693609  566681 main.go:141] libmachine: (addons-535714)   </devices>
	I1002 06:57:13.693618  566681 main.go:141] libmachine: (addons-535714) </domain>
	I1002 06:57:13.693631  566681 main.go:141] libmachine: (addons-535714) 
	I1002 06:57:13.698471  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:ff:9b:2c in network default
	I1002 06:57:13.699181  566681 main.go:141] libmachine: (addons-535714) starting domain...
	I1002 06:57:13.699210  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:13.699219  566681 main.go:141] libmachine: (addons-535714) ensuring networks are active...
	I1002 06:57:13.699886  566681 main.go:141] libmachine: (addons-535714) Ensuring network default is active
	I1002 06:57:13.700240  566681 main.go:141] libmachine: (addons-535714) Ensuring network mk-addons-535714 is active
	I1002 06:57:13.700911  566681 main.go:141] libmachine: (addons-535714) getting domain XML...
	I1002 06:57:13.701998  566681 main.go:141] libmachine: (addons-535714) DBG | starting domain XML:
	I1002 06:57:13.702019  566681 main.go:141] libmachine: (addons-535714) DBG | <domain type='kvm'>
	I1002 06:57:13.702029  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>addons-535714</name>
	I1002 06:57:13.702036  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>26ed18e3-cae3-43e2-ba2a-85be4a0a7371</uuid>
	I1002 06:57:13.702049  566681 main.go:141] libmachine: (addons-535714) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 06:57:13.702060  566681 main.go:141] libmachine: (addons-535714) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 06:57:13.702069  566681 main.go:141] libmachine: (addons-535714) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 06:57:13.702075  566681 main.go:141] libmachine: (addons-535714) DBG |   <os>
	I1002 06:57:13.702085  566681 main.go:141] libmachine: (addons-535714) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 06:57:13.702093  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='cdrom'/>
	I1002 06:57:13.702101  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='hd'/>
	I1002 06:57:13.702116  566681 main.go:141] libmachine: (addons-535714) DBG |     <bootmenu enable='no'/>
	I1002 06:57:13.702127  566681 main.go:141] libmachine: (addons-535714) DBG |   </os>
	I1002 06:57:13.702134  566681 main.go:141] libmachine: (addons-535714) DBG |   <features>
	I1002 06:57:13.702180  566681 main.go:141] libmachine: (addons-535714) DBG |     <acpi/>
	I1002 06:57:13.702204  566681 main.go:141] libmachine: (addons-535714) DBG |     <apic/>
	I1002 06:57:13.702215  566681 main.go:141] libmachine: (addons-535714) DBG |     <pae/>
	I1002 06:57:13.702220  566681 main.go:141] libmachine: (addons-535714) DBG |   </features>
	I1002 06:57:13.702241  566681 main.go:141] libmachine: (addons-535714) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 06:57:13.702256  566681 main.go:141] libmachine: (addons-535714) DBG |   <clock offset='utc'/>
	I1002 06:57:13.702265  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 06:57:13.702283  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_reboot>restart</on_reboot>
	I1002 06:57:13.702295  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_crash>destroy</on_crash>
	I1002 06:57:13.702305  566681 main.go:141] libmachine: (addons-535714) DBG |   <devices>
	I1002 06:57:13.702317  566681 main.go:141] libmachine: (addons-535714) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 06:57:13.702328  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='cdrom'>
	I1002 06:57:13.702340  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw'/>
	I1002 06:57:13.702352  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.702364  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.702375  566681 main.go:141] libmachine: (addons-535714) DBG |       <readonly/>
	I1002 06:57:13.702387  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 06:57:13.702398  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702419  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='disk'>
	I1002 06:57:13.702432  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 06:57:13.702451  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.702462  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.702472  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 06:57:13.702482  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702490  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 06:57:13.702503  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 06:57:13.702512  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702521  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 06:57:13.702535  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 06:57:13.702589  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 06:57:13.702612  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702624  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702630  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:00:74:bc'/>
	I1002 06:57:13.702639  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='mk-addons-535714'/>
	I1002 06:57:13.702646  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702658  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 06:57:13.702665  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702675  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702687  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:ff:9b:2c'/>
	I1002 06:57:13.702697  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='default'/>
	I1002 06:57:13.702707  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702719  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 06:57:13.702730  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702740  566681 main.go:141] libmachine: (addons-535714) DBG |     <serial type='pty'>
	I1002 06:57:13.702751  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='isa-serial' port='0'>
	I1002 06:57:13.702765  566681 main.go:141] libmachine: (addons-535714) DBG |         <model name='isa-serial'/>
	I1002 06:57:13.702775  566681 main.go:141] libmachine: (addons-535714) DBG |       </target>
	I1002 06:57:13.702784  566681 main.go:141] libmachine: (addons-535714) DBG |     </serial>
	I1002 06:57:13.702806  566681 main.go:141] libmachine: (addons-535714) DBG |     <console type='pty'>
	I1002 06:57:13.702820  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='serial' port='0'/>
	I1002 06:57:13.702827  566681 main.go:141] libmachine: (addons-535714) DBG |     </console>
	I1002 06:57:13.702839  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='mouse' bus='ps2'/>
	I1002 06:57:13.702850  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 06:57:13.702861  566681 main.go:141] libmachine: (addons-535714) DBG |     <audio id='1' type='none'/>
	I1002 06:57:13.702881  566681 main.go:141] libmachine: (addons-535714) DBG |     <memballoon model='virtio'>
	I1002 06:57:13.702895  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 06:57:13.702901  566681 main.go:141] libmachine: (addons-535714) DBG |     </memballoon>
	I1002 06:57:13.702910  566681 main.go:141] libmachine: (addons-535714) DBG |     <rng model='virtio'>
	I1002 06:57:13.702918  566681 main.go:141] libmachine: (addons-535714) DBG |       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.702929  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 06:57:13.702944  566681 main.go:141] libmachine: (addons-535714) DBG |     </rng>
	I1002 06:57:13.702957  566681 main.go:141] libmachine: (addons-535714) DBG |   </devices>
	I1002 06:57:13.702972  566681 main.go:141] libmachine: (addons-535714) DBG | </domain>
	I1002 06:57:13.702987  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:14.963247  566681 main.go:141] libmachine: (addons-535714) waiting for domain to start...
	I1002 06:57:14.964664  566681 main.go:141] libmachine: (addons-535714) domain is now running
	I1002 06:57:14.964695  566681 main.go:141] libmachine: (addons-535714) waiting for IP...
	I1002 06:57:14.965420  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:14.966032  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:14.966060  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:14.966362  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:14.966431  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:14.966367  566709 retry.go:31] will retry after 210.201926ms: waiting for domain to come up
	I1002 06:57:15.178058  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.178797  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.178832  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.179051  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.179089  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.179030  566709 retry.go:31] will retry after 312.318729ms: waiting for domain to come up
	I1002 06:57:15.493036  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.493844  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.493865  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.494158  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.494260  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.494172  566709 retry.go:31] will retry after 379.144998ms: waiting for domain to come up
	I1002 06:57:15.874866  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.875597  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.875618  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.875940  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.875972  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.875891  566709 retry.go:31] will retry after 392.719807ms: waiting for domain to come up
	I1002 06:57:16.270678  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.271369  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.271417  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.271795  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.271822  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.271752  566709 retry.go:31] will retry after 502.852746ms: waiting for domain to come up
	I1002 06:57:16.776382  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.777033  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.777083  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.777418  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.777452  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.777390  566709 retry.go:31] will retry after 817.041708ms: waiting for domain to come up
	I1002 06:57:17.596403  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:17.597002  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:17.597037  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:17.597304  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:17.597337  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:17.597286  566709 retry.go:31] will retry after 1.129250566s: waiting for domain to come up
	I1002 06:57:18.728727  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:18.729410  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:18.729438  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:18.729739  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:18.729770  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:18.729716  566709 retry.go:31] will retry after 1.486801145s: waiting for domain to come up
	I1002 06:57:20.218801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:20.219514  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:20.219546  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:20.219811  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:20.219864  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:20.219802  566709 retry.go:31] will retry after 1.676409542s: waiting for domain to come up
	I1002 06:57:21.898812  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:21.899513  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:21.899536  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:21.899819  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:21.899877  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:21.899808  566709 retry.go:31] will retry after 1.43578276s: waiting for domain to come up
	I1002 06:57:23.337598  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:23.338214  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:23.338235  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:23.338569  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:23.338642  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:23.338553  566709 retry.go:31] will retry after 2.182622976s: waiting for domain to come up
	I1002 06:57:25.524305  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:25.524996  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:25.525030  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:25.525352  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:25.525383  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:25.525329  566709 retry.go:31] will retry after 2.567637867s: waiting for domain to come up
	I1002 06:57:28.094839  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:28.095351  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:28.095371  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:28.095666  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:28.095696  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:28.095635  566709 retry.go:31] will retry after 3.838879921s: waiting for domain to come up
	I1002 06:57:31.938799  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939560  566681 main.go:141] libmachine: (addons-535714) found domain IP: 192.168.39.164
	I1002 06:57:31.939593  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has current primary IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939601  566681 main.go:141] libmachine: (addons-535714) reserving static IP address...
	I1002 06:57:31.940101  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find host DHCP lease matching {name: "addons-535714", mac: "52:54:00:00:74:bc", ip: "192.168.39.164"} in network mk-addons-535714
	I1002 06:57:32.153010  566681 main.go:141] libmachine: (addons-535714) DBG | Getting to WaitForSSH function...
	I1002 06:57:32.153043  566681 main.go:141] libmachine: (addons-535714) reserved static IP address 192.168.39.164 for domain addons-535714
	I1002 06:57:32.153056  566681 main.go:141] libmachine: (addons-535714) waiting for SSH...
	I1002 06:57:32.156675  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157263  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.157288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157522  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH client type: external
	I1002 06:57:32.157548  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa (-rw-------)
	I1002 06:57:32.157582  566681 main.go:141] libmachine: (addons-535714) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 06:57:32.157609  566681 main.go:141] libmachine: (addons-535714) DBG | About to run SSH command:
	I1002 06:57:32.157620  566681 main.go:141] libmachine: (addons-535714) DBG | exit 0
	I1002 06:57:32.286418  566681 main.go:141] libmachine: (addons-535714) DBG | SSH cmd err, output: <nil>: 
	I1002 06:57:32.286733  566681 main.go:141] libmachine: (addons-535714) domain creation complete
	I1002 06:57:32.287044  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:32.287640  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288020  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288207  566681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 06:57:32.288223  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:32.289782  566681 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 06:57:32.289795  566681 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 06:57:32.289800  566681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 06:57:32.289805  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.292433  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.292851  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.292897  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.293050  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.293317  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293481  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293658  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.293813  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.294063  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.294076  566681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 06:57:32.392654  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.392681  566681 main.go:141] libmachine: Detecting the provisioner...
	I1002 06:57:32.392690  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.396029  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396454  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.396486  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396681  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.396903  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397079  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397260  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.397412  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.397680  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.397696  566681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 06:57:32.501992  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 06:57:32.502093  566681 main.go:141] libmachine: found compatible host: buildroot
	I1002 06:57:32.502117  566681 main.go:141] libmachine: Provisioning with buildroot...
	I1002 06:57:32.502131  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502439  566681 buildroot.go:166] provisioning hostname "addons-535714"
	I1002 06:57:32.502476  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502701  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.506170  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506653  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.506716  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506786  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.507040  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507252  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507426  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.507729  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.507997  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.508013  566681 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535714 && echo "addons-535714" | sudo tee /etc/hostname
	I1002 06:57:32.632360  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535714
	
	I1002 06:57:32.632404  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.635804  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636293  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.636319  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636574  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.636804  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.636969  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.637110  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.637297  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.637584  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.637613  566681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:57:32.752063  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.752119  566681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 06:57:32.752193  566681 buildroot.go:174] setting up certificates
	I1002 06:57:32.752210  566681 provision.go:84] configureAuth start
	I1002 06:57:32.752256  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.752721  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:32.756026  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756514  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.756545  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756704  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.759506  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.759945  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.759972  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.760113  566681 provision.go:143] copyHostCerts
	I1002 06:57:32.760210  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 06:57:32.760331  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 06:57:32.760392  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 06:57:32.760440  566681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.addons-535714 san=[127.0.0.1 192.168.39.164 addons-535714 localhost minikube]
	I1002 06:57:32.997259  566681 provision.go:177] copyRemoteCerts
	I1002 06:57:32.997339  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:57:32.997365  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.001746  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002246  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.002275  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002606  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.002841  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.003067  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.003261  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.087811  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:57:33.120074  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:57:33.152344  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:57:33.183560  566681 provision.go:87] duration metric: took 431.305231ms to configureAuth
	I1002 06:57:33.183592  566681 buildroot.go:189] setting minikube options for container-runtime
	I1002 06:57:33.183785  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:33.183901  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.187438  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.187801  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.187825  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.188034  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.188285  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188508  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188682  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.188927  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.189221  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.189246  566681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:57:33.455871  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:57:33.455896  566681 main.go:141] libmachine: Checking connection to Docker...
	I1002 06:57:33.455904  566681 main.go:141] libmachine: (addons-535714) Calling .GetURL
	I1002 06:57:33.457296  566681 main.go:141] libmachine: (addons-535714) DBG | using libvirt version 8000000
	I1002 06:57:33.460125  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460550  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.460582  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460738  566681 main.go:141] libmachine: Docker is up and running!
	I1002 06:57:33.460770  566681 main.go:141] libmachine: Reticulating splines...
	I1002 06:57:33.460780  566681 client.go:171] duration metric: took 20.753318284s to LocalClient.Create
	I1002 06:57:33.460805  566681 start.go:167] duration metric: took 20.753406484s to libmachine.API.Create "addons-535714"
	I1002 06:57:33.460815  566681 start.go:293] postStartSetup for "addons-535714" (driver="kvm2")
	I1002 06:57:33.460824  566681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:57:33.460841  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.461104  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:57:33.461149  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.463666  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464001  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.464024  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.464486  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.464662  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.464805  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.547032  566681 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:57:33.552379  566681 info.go:137] Remote host: Buildroot 2025.02
	I1002 06:57:33.552408  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 06:57:33.552489  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 06:57:33.552524  566681 start.go:296] duration metric: took 91.702797ms for postStartSetup
	I1002 06:57:33.552573  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:33.553229  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.556294  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556659  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.556691  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556979  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:33.557200  566681 start.go:128] duration metric: took 20.867433906s to createHost
	I1002 06:57:33.557235  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.559569  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.559976  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.560033  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.560209  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.560387  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560524  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560647  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.560782  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.561006  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.561024  566681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 06:57:33.663941  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759388253.625480282
	
	I1002 06:57:33.663966  566681 fix.go:216] guest clock: 1759388253.625480282
	I1002 06:57:33.663974  566681 fix.go:229] Guest: 2025-10-02 06:57:33.625480282 +0000 UTC Remote: 2025-10-02 06:57:33.557215192 +0000 UTC m=+20.980868887 (delta=68.26509ms)
	I1002 06:57:33.664010  566681 fix.go:200] guest clock delta is within tolerance: 68.26509ms
	I1002 06:57:33.664022  566681 start.go:83] releasing machines lock for "addons-535714", held for 20.974372731s
	I1002 06:57:33.664050  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.664374  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.667827  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668310  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.668344  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668518  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669079  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669275  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669418  566681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:57:33.669466  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.669473  566681 ssh_runner.go:195] Run: cat /version.json
	I1002 06:57:33.669492  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.672964  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673168  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673457  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673495  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673642  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673878  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674001  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.674093  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674177  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674268  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.674352  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674502  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.752747  566681 ssh_runner.go:195] Run: systemctl --version
	I1002 06:57:33.777712  566681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:57:33.941402  566681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:57:33.949414  566681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:57:33.949490  566681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:57:33.971089  566681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:57:33.971121  566681 start.go:495] detecting cgroup driver to use...
	I1002 06:57:33.971215  566681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:57:33.990997  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:57:34.009642  566681 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:57:34.009719  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:57:34.028675  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:57:34.045011  566681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:57:34.191090  566681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:57:34.404836  566681 docker.go:234] disabling docker service ...
	I1002 06:57:34.404915  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:57:34.421846  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:57:34.437815  566681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:57:34.593256  566681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:57:34.739807  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:57:34.755656  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
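The mkdir + printf | tee step above points crictl at the CRI-O socket. A minimal, self-contained sketch of the same write, assuming a scratch directory in place of /etc so no sudo is needed:

```shell
#!/bin/sh
set -eu
# Scratch directory standing in for / (avoids root for the demonstration).
root=$(mktemp -d)
mkdir -p "$root/etc"
# Same pattern as the log line, minus sudo: printf the YAML, tee it into place.
printf %s 'runtime-endpoint: unix:///var/run/crio/crio.sock
' | tee "$root/etc/crictl.yaml" >/dev/null
cat "$root/etc/crictl.yaml"
rm -rf "$root"
```

tee (rather than a plain redirect) is what lets the real command elevate only the write with sudo while the printf runs unprivileged.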
	I1002 06:57:34.780318  566681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:57:34.780381  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.794344  566681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:57:34.794437  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.807921  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.821174  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.834265  566681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:57:34.848039  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.861013  566681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.882928  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
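The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: swap the pause image, force the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. A sketch of the same substitutions against a hypothetical starting config in a temp file (GNU sed assumed; the real file ships in the minikube ISO):

```shell
#!/bin/sh
set -eu
conf=$(mktemp)
# Hypothetical starting state of 02-crio.conf.
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions as the log, with sudo dropped and the path swapped.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

# All three rewrites should now be visible in the file.
grep -q 'cgroup_manager = "cgroupfs"' "$conf"
grep -q 'conmon_cgroup = "pod"' "$conf"
grep -q 'net.ipv4.ip_unprivileged_port_start=0' "$conf"
echo ok
rm -f "$conf"
```

The delete-then-append pair for conmon_cgroup is what makes the edit idempotent: rerunning it never stacks duplicate lines.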
	I1002 06:57:34.895874  566681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:57:34.906834  566681 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:57:34.906902  566681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:57:34.930283  566681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:57:34.944196  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:35.086744  566681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:57:35.203118  566681 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:57:35.203247  566681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:57:35.208872  566681 start.go:563] Will wait 60s for crictl version
	I1002 06:57:35.208951  566681 ssh_runner.go:195] Run: which crictl
	I1002 06:57:35.213165  566681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 06:57:35.254690  566681 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 06:57:35.254809  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.285339  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.318360  566681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 06:57:35.319680  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:35.322840  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323187  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:35.323215  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323541  566681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 06:57:35.328294  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:35.344278  566681 kubeadm.go:883] updating cluster {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:57:35.344381  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:35.344426  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:35.382419  566681 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 06:57:35.382487  566681 ssh_runner.go:195] Run: which lz4
	I1002 06:57:35.386980  566681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 06:57:35.392427  566681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 06:57:35.392457  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 06:57:36.901929  566681 crio.go:462] duration metric: took 1.514994717s to copy over tarball
	I1002 06:57:36.902020  566681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 06:57:38.487982  566681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.585912508s)
	I1002 06:57:38.488018  566681 crio.go:469] duration metric: took 1.586055344s to extract the tarball
	I1002 06:57:38.488028  566681 ssh_runner.go:146] rm: /preloaded.tar.lz4
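The preload path above is copy, extract, delete: scp the image tarball to the VM, untar it under /var, then remove the tarball to reclaim disk. A self-contained sketch of that cycle, with gzip standing in for lz4 and temp directories in place of the VM paths:

```shell
#!/bin/sh
set -eu
src=$(mktemp -d)   # stands in for the host-side preload cache
var=$(mktemp -d)   # stands in for /var on the VM

# Build a tiny "preload" tarball (gzip here; the real one is lz4).
mkdir -p "$src/lib/containers"
echo layer-data > "$src/lib/containers/layer"
tar -C "$src" -czf "$src/preloaded.tar.gz" lib

# Analogous to: sudo tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4
tar -C "$var" -xzf "$src/preloaded.tar.gz"
rm -f "$src/preloaded.tar.gz"   # the log's rm step: free the space once extracted

cat "$var/lib/containers/layer"
rm -rf "$src" "$var"
```

The -C flag is what lets the archive be built and unpacked relative to different roots without embedding absolute paths.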
	I1002 06:57:38.530041  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:38.574743  566681 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:57:38.574771  566681 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:57:38.574780  566681 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.34.1 crio true true} ...
	I1002 06:57:38.574907  566681 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
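The unit text above is the kubelet systemd drop-in minikube generates. A sketch that writes it to a temp file and checks the override pattern, where the empty ExecStart= clears the base unit's command before the replacement is set:

```shell
#!/bin/sh
set -eu
dropin=$(mktemp)
# The drop-in from the log, destined for
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the VM.
cat > "$dropin" <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164

[Install]
EOF
# Two ExecStart lines: the blank reset plus the override.
grep -c '^ExecStart' "$dropin"
rm -f "$dropin"
```

Without the blank ExecStart= line, systemd would reject the drop-in for a non-oneshot service, since Type=simple units allow only one ExecStart.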
	I1002 06:57:38.574982  566681 ssh_runner.go:195] Run: crio config
	I1002 06:57:38.626077  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:38.626100  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:38.626114  566681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:57:38.626157  566681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535714 NodeName:addons-535714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:57:38.626290  566681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-535714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
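The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch that checks the document layout on an abbreviated copy, trimmed to the apiVersion/kind headers; the full contents are what the log shows:

```shell
#!/bin/sh
set -eu
cfg=$(mktemp)
# Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # number of embedded config objects
rm -f "$cfg"
```

kubeadm parses each document independently, which is why per-component settings (kubelet cgroup driver, kube-proxy conntrack limits) live in their own objects rather than under ClusterConfiguration.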
	
	I1002 06:57:38.626379  566681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:57:38.638875  566681 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:57:38.638942  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:57:38.650923  566681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 06:57:38.672765  566681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:57:38.695198  566681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 06:57:38.716738  566681 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1002 06:57:38.721153  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
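The bash one-liner above is minikube's idempotent /etc/hosts update: strip any stale entry for the name, append the fresh IP-name pair, and copy the temp file back over the original. A sketch against a scratch hosts file, showing that running it twice leaves exactly one entry:

```shell
#!/bin/sh
set -eu
hosts=$(mktemp)          # stands in for /etc/hosts
printf '127.0.0.1\tlocalhost\n' > "$hosts"
tab=$(printf '\t')

update_hosts() {
  # Same shape as the log's command: drop the old entry, append the new,
  # then copy the temp file back (the real command uses sudo cp).
  { grep -v "${tab}control-plane.minikube.internal\$" "$hosts" || true; \
    printf '192.168.39.164\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
  cp "$hosts.new" "$hosts" && rm -f "$hosts.new"
}

update_hosts
update_hosts             # a second run must not duplicate the entry
grep -c 'control-plane.minikube.internal' "$hosts"
rm -f "$hosts"
```

The grep-then-append structure is why the preceding `grep 192.168.39.164 ... /etc/hosts` probe can fail harmlessly: a missing entry and a stale entry are handled by the same command.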
	I1002 06:57:38.736469  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:38.882003  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:38.903662  566681 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714 for IP: 192.168.39.164
	I1002 06:57:38.903695  566681 certs.go:195] generating shared ca certs ...
	I1002 06:57:38.903722  566681 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.903919  566681 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 06:57:38.961629  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt ...
	I1002 06:57:38.961659  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt: {Name:mkce3dd067e2e7843e2a288d28dbaf57f057aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961829  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key ...
	I1002 06:57:38.961841  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key: {Name:mka327360c05168b3164194068242bb15d511ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961939  566681 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 06:57:39.050167  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt ...
	I1002 06:57:39.050199  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt: {Name:mkf18fa19ddf5ebcd4669a9a2e369e414c03725b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050375  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key ...
	I1002 06:57:39.050388  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key: {Name:mk774f61354e64c5344d2d0d059164fff9076c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050460  566681 certs.go:257] generating profile certs ...
	I1002 06:57:39.050516  566681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key
	I1002 06:57:39.050537  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt with IP's: []
	I1002 06:57:39.147298  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt ...
	I1002 06:57:39.147330  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: {Name:mk17b498d515b2f43666faa03b17d7223c9a8157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147495  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key ...
	I1002 06:57:39.147505  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key: {Name:mke1e8140b8916f87dd85d98abe8a51503f6e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147578  566681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed
	I1002 06:57:39.147597  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1002 06:57:39.310236  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed ...
	I1002 06:57:39.310266  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed: {Name:mk247c08955d8ed7427926c7244db21ffe837768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310428  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed ...
	I1002 06:57:39.310441  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed: {Name:mkc3fa16c2fd82a07eac700fa655e28a42c60f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310525  566681 certs.go:382] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt
	I1002 06:57:39.310624  566681 certs.go:386] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key
	I1002 06:57:39.310682  566681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key
	I1002 06:57:39.310701  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt with IP's: []
	I1002 06:57:39.497350  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt ...
	I1002 06:57:39.497386  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt: {Name:mk4f28529f4cee1ff8311028b7bb7fc35a77bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497555  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key ...
	I1002 06:57:39.497569  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key: {Name:mkfac0b0a329edb8634114371202cb4ba011c129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497750  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:57:39.497784  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:57:39.497808  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:57:39.497835  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 06:57:39.498475  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:57:39.530649  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:57:39.561340  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:57:39.593844  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:57:39.629628  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:57:39.668367  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:57:39.699924  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:57:39.730177  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:57:39.761107  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:57:39.791592  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:57:39.813294  566681 ssh_runner.go:195] Run: openssl version
	I1002 06:57:39.820587  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:57:39.834664  566681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840283  566681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840348  566681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.848412  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
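The two steps above build the classic OpenSSL trust-store layout: compute the CA certificate's subject hash, then symlink the cert under <hash>.0 so verification routines can find it by name. A sketch using a throwaway self-signed cert in a temp directory (assumes the openssl CLI is installed; the cert here is a stand-in, not the real minikubeCA):

```shell
#!/bin/sh
set -eu
dir=$(mktemp -d)         # stands in for /etc/ssl/certs

# Throwaway self-signed cert in place of minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" -days 1 2>/dev/null

# Same idea as the log: hash the subject, link <hash>.0 -> the cert.
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

test -L "$dir/$hash.0" && echo linked
rm -rf "$dir"
```

The `.0` suffix is a collision index: if two CAs hashed to the same value, the second would be linked as `<hash>.1`, which is also why the log's `test -L || ln -fs` guard checks the exact b5213941.0 name.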
	I1002 06:57:39.863027  566681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:57:39.868269  566681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:57:39.868325  566681 kubeadm.go:400] StartCluster: {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:39.868408  566681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:57:39.868500  566681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:57:39.910571  566681 cri.go:89] found id: ""
	I1002 06:57:39.910645  566681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:57:39.923825  566681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:57:39.936522  566681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:57:39.949191  566681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:57:39.949214  566681 kubeadm.go:157] found existing configuration files:
	
	I1002 06:57:39.949292  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:57:39.961561  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:57:39.961637  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:57:39.974337  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:57:39.986029  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:57:39.986104  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:57:39.997992  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.008894  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:57:40.008966  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.021235  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:57:40.032694  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:57:40.032754  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:57:40.045554  566681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 06:57:40.211362  566681 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:57:51.799597  566681 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:57:51.799689  566681 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:57:51.799798  566681 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:57:51.799950  566681 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:57:51.800082  566681 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:57:51.800206  566681 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:57:51.802349  566681 out.go:252]   - Generating certificates and keys ...
	I1002 06:57:51.802439  566681 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:57:51.802492  566681 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:57:51.802586  566681 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:57:51.802729  566681 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:57:51.802823  566681 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:57:51.802894  566681 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:57:51.802944  566681 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:57:51.803058  566681 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803125  566681 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:57:51.803276  566681 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803350  566681 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:57:51.803420  566681 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:57:51.803491  566681 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:57:51.803557  566681 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:57:51.803634  566681 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:57:51.803717  566681 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:57:51.803807  566681 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:57:51.803899  566681 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:57:51.803950  566681 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:57:51.804029  566681 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:57:51.804088  566681 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:57:51.805702  566681 out.go:252]   - Booting up control plane ...
	I1002 06:57:51.805781  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:57:51.805846  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:57:51.805929  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:57:51.806028  566681 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:57:51.806148  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:57:51.806260  566681 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:57:51.806361  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:57:51.806420  566681 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:57:51.806575  566681 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:57:51.806669  566681 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:57:51.806717  566681 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.672587ms
	I1002 06:57:51.806806  566681 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:57:51.806892  566681 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1002 06:57:51.806963  566681 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:57:51.807067  566681 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:57:51.807185  566681 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.362189492s
	I1002 06:57:51.807284  566681 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.802664802s
	I1002 06:57:51.807338  566681 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003805488s
	I1002 06:57:51.807453  566681 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:57:51.807587  566681 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:57:51.807642  566681 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:57:51.807816  566681 kubeadm.go:318] [mark-control-plane] Marking the node addons-535714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:57:51.807890  566681 kubeadm.go:318] [bootstrap-token] Using token: 7tuk3k.1448ee54qv9op8vd
	I1002 06:57:51.810266  566681 out.go:252]   - Configuring RBAC rules ...
	I1002 06:57:51.810355  566681 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:57:51.810443  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:57:51.810582  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:57:51.810746  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:57:51.810922  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:57:51.811039  566681 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:57:51.811131  566681 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:57:51.811203  566681 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:57:51.811259  566681 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:57:51.811271  566681 kubeadm.go:318] 
	I1002 06:57:51.811321  566681 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:57:51.811327  566681 kubeadm.go:318] 
	I1002 06:57:51.811408  566681 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:57:51.811416  566681 kubeadm.go:318] 
	I1002 06:57:51.811438  566681 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:57:51.811524  566681 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:57:51.811568  566681 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:57:51.811574  566681 kubeadm.go:318] 
	I1002 06:57:51.811638  566681 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:57:51.811650  566681 kubeadm.go:318] 
	I1002 06:57:51.811704  566681 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:57:51.811711  566681 kubeadm.go:318] 
	I1002 06:57:51.811751  566681 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:57:51.811811  566681 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:57:51.811912  566681 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:57:51.811926  566681 kubeadm.go:318] 
	I1002 06:57:51.812042  566681 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:57:51.812153  566681 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:57:51.812165  566681 kubeadm.go:318] 
	I1002 06:57:51.812280  566681 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812417  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 \
	I1002 06:57:51.812453  566681 kubeadm.go:318] 	--control-plane 
	I1002 06:57:51.812464  566681 kubeadm.go:318] 
	I1002 06:57:51.812595  566681 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:57:51.812615  566681 kubeadm.go:318] 
	I1002 06:57:51.812711  566681 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812863  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 
	I1002 06:57:51.812931  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:51.812944  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:51.815686  566681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 06:57:51.817060  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 06:57:51.834402  566681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 06:57:51.858951  566681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:57:51.859117  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:51.859124  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535714 minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-535714 minikube.k8s.io/primary=true
	I1002 06:57:51.921378  566681 ops.go:34] apiserver oom_adj: -16
	I1002 06:57:52.030323  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:52.531214  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.031113  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.531050  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.030867  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.531128  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.030521  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.530702  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.030762  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.196068  566681 kubeadm.go:1113] duration metric: took 4.337043927s to wait for elevateKubeSystemPrivileges
	I1002 06:57:56.196100  566681 kubeadm.go:402] duration metric: took 16.3277794s to StartCluster
	I1002 06:57:56.196121  566681 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.196294  566681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:56.196768  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.197012  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:57:56.197039  566681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:56.197157  566681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:57:56.197305  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.197326  566681 addons.go:69] Setting ingress=true in profile "addons-535714"
	I1002 06:57:56.197323  566681 addons.go:69] Setting default-storageclass=true in profile "addons-535714"
	I1002 06:57:56.197353  566681 addons.go:238] Setting addon ingress=true in "addons-535714"
	I1002 06:57:56.197360  566681 addons.go:69] Setting registry=true in profile "addons-535714"
	I1002 06:57:56.197367  566681 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535714"
	I1002 06:57:56.197376  566681 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535714"
	I1002 06:57:56.197382  566681 addons.go:69] Setting volumesnapshots=true in profile "addons-535714"
	I1002 06:57:56.197391  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535714"
	I1002 06:57:56.197393  566681 addons.go:69] Setting ingress-dns=true in profile "addons-535714"
	I1002 06:57:56.197397  566681 addons.go:238] Setting addon volumesnapshots=true in "addons-535714"
	I1002 06:57:56.197403  566681 addons.go:238] Setting addon ingress-dns=true in "addons-535714"
	I1002 06:57:56.197413  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197417  566681 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:57:56.197432  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197438  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197454  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197317  566681 addons.go:69] Setting gcp-auth=true in profile "addons-535714"
	I1002 06:57:56.197804  566681 addons.go:69] Setting metrics-server=true in profile "addons-535714"
	I1002 06:57:56.197813  566681 mustload.go:65] Loading cluster: addons-535714
	I1002 06:57:56.197822  566681 addons.go:238] Setting addon metrics-server=true in "addons-535714"
	I1002 06:57:56.197849  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197953  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197985  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197348  566681 addons.go:69] Setting cloud-spanner=true in profile "addons-535714"
	I1002 06:57:56.197995  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198002  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.198025  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198027  566681 addons.go:69] Setting inspektor-gadget=true in profile "addons-535714"
	I1002 06:57:56.198034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198040  566681 addons.go:238] Setting addon inspektor-gadget=true in "addons-535714"
	I1002 06:57:56.198051  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198062  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198075  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198080  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198105  566681 addons.go:69] Setting volcano=true in profile "addons-535714"
	I1002 06:57:56.198115  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198118  566681 addons.go:238] Setting addon volcano=true in "addons-535714"
	I1002 06:57:56.198121  566681 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198148  566681 addons.go:69] Setting registry-creds=true in profile "addons-535714"
	I1002 06:57:56.198149  566681 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-535714"
	I1002 06:57:56.198007  566681 addons.go:238] Setting addon cloud-spanner=true in "addons-535714"
	I1002 06:57:56.197369  566681 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198159  566681 addons.go:238] Setting addon registry-creds=true in "addons-535714"
	I1002 06:57:56.197383  566681 addons.go:238] Setting addon registry=true in "addons-535714"
	I1002 06:57:56.198168  566681 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-535714"
	I1002 06:57:56.197305  566681 addons.go:69] Setting yakd=true in profile "addons-535714"
	I1002 06:57:56.198174  566681 addons.go:69] Setting storage-provisioner=true in profile "addons-535714"
	I1002 06:57:56.198182  566681 addons.go:238] Setting addon yakd=true in "addons-535714"
	I1002 06:57:56.198188  566681 addons.go:238] Setting addon storage-provisioner=true in "addons-535714"
	I1002 06:57:56.197356  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535714"
	I1002 06:57:56.197990  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198337  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198362  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198371  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198392  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198402  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198453  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198563  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198685  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198716  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198796  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198823  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198872  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198882  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198903  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.199225  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199278  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199496  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199602  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199605  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.199635  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.200717  566681 out.go:179] * Verifying Kubernetes components...
	I1002 06:57:56.203661  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:56.205590  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205627  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.205734  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205767  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.207434  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.207479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.210405  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.210443  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.213438  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.213479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.214017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.214056  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.232071  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1002 06:57:56.233110  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.234209  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.234234  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.234937  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.236013  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.236165  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.237450  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I1002 06:57:56.239323  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I1002 06:57:56.239414  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1002 06:57:56.240034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240196  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240748  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.240776  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.240868  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1002 06:57:56.240881  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.241379  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.241396  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.241535  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.242519  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.242540  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.242696  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.242735  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.242850  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.243325  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1002 06:57:56.243893  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.243945  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.244617  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.244654  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.245057  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.245890  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.245907  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.246010  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 06:57:56.246033  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1002 06:57:56.246568  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.247024  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.247099  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.247133  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.247421  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1002 06:57:56.247710  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.247729  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.248188  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.248445  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.249846  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.250467  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.251029  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251054  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.251579  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251601  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.252078  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.252654  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.252734  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.255593  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.255986  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.256022  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.257178  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.257900  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.257951  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.258275  566681 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-535714"
	I1002 06:57:56.259770  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.259874  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.260317  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.260360  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.260738  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.260770  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.261307  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.261989  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.262034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.263359  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I1002 06:57:56.263562  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I1002 06:57:56.264010  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.264539  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.264559  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.265015  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.265220  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.268199  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I1002 06:57:56.268835  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.269385  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.269407  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.269800  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.272103  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.272173  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.272820  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.274630  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1002 06:57:56.275810  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I1002 06:57:56.275999  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1002 06:57:56.276099  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I1002 06:57:56.276317  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1002 06:57:56.276957  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277804  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.277826  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.277935  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277992  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.279294  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.279318  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.279418  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279522  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1002 06:57:56.279526  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.280424  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280801  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280956  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:57:56.280961  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281067  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281080  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281248  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281259  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281396  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.280977  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281804  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.281870  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.282274  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282869  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.282901  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282927  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.282975  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.283442  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.284009  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.284202  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.284751  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:56.284768  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:57:56.284787  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.284857  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.284890  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.285017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.285054  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.288207  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.289274  566681 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:57:56.289290  566681 addons.go:238] Setting addon default-storageclass=true in "addons-535714"
	I1002 06:57:56.289364  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.289753  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.289797  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290042  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1002 06:57:56.290151  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.290556  566681 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:56.290578  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.290579  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:57:56.290609  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290771  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290990  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.291089  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1002 06:57:56.291362  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291376  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.291505  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291516  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.292055  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.293244  566681 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:57:56.294939  566681 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:56.294996  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:57:56.295277  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.296317  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.296363  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.296433  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I1002 06:57:56.297190  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.297368  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.300772  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.300866  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.300946  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.300966  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.300983  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301003  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301026  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.301076  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1002 06:57:56.301165  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.301203  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.301228  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1002 06:57:56.301400  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.301411  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.301454  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.301467  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.303443  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.303720  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.303466  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.303491  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.303762  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.303770  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.303776  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303526  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.303632  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.304435  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.304932  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.305291  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.305345  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.305464  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.305492  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1002 06:57:56.305495  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.305508  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.305577  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.305592  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	W1002 06:57:56.305630  566681 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:57:56.306621  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307189  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.307311  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.307383  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.307409  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307505  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.307540  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.307955  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.307981  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.308071  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.308163  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.308587  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.309033  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309057  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309132  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309293  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.309302  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309314  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309372  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.309533  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309698  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309703  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.309839  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310208  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310523  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.311044  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.311749  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.313557  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.316426  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1002 06:57:56.319293  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1002 06:57:56.319454  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.319564  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1002 06:57:56.319675  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1002 06:57:56.319683  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.319813  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.320386  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.320405  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.320695  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.320492  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.321204  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.321258  566681 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:57:56.321684  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.321443  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I1002 06:57:56.321593  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.321816  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.322144  566681 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:57:56.322156  566681 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:57:56.323037  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.323050  566681 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:56.323066  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:57:56.323087  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.323146  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.323323  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.323337  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.324564  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:56.324583  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:57:56.324603  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.324892  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325026  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.325041  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.325304  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1002 06:57:56.325602  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325730  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.325892  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326132  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326261  566681 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:57:56.327284  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.327472  566681 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:57:56.327597  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:57:56.327623  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.328569  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.328642  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.328661  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329119  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.329383  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.329634  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.329665  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329932  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.330003  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.331010  566681 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:57:56.331650  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332245  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:57:56.332277  566681 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:57:56.332261  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332297  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.332372  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.333369  566681 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:57:56.333621  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.333646  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.333810  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.334276  566681 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:57:56.334843  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.335194  566681 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:56.335210  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:57:56.335228  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.335446  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.335655  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I1002 06:57:56.335851  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.336132  566681 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:57:56.336170  566681 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:57:56.336280  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.336440  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 06:57:56.336618  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.337098  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.338250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.338315  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.338584  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.338676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.338709  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.338721  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.339313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.339382  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.339452  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.339507  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.340336  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.340677  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.340657  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341043  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.341796  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.341865  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342040  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342263  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342431  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342440  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342454  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342502  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342595  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342614  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342621  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342695  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.342072  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343379  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343750  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.343817  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.343832  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.344562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.344702  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344753  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.344946  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.345322  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.345404  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345404  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.345548  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.345606  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345806  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.346007  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.346320  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.346590  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.346862  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1002 06:57:56.347602  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.347914  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.348757  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.348800  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.349261  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.349633  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.349706  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.350337  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:57:56.351587  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:57:56.351643  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.351655  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:57:56.352903  566681 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:57:56.352987  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:57:56.353046  566681 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:57:56.353092  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.352987  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:57:56.353974  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1002 06:57:56.354300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1002 06:57:56.354530  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:57:56.354545  566681 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:57:56.354562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.354607  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355031  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355314  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355362  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355747  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.355869  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:57:56.355907  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355921  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355982  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.356446  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.356686  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.358485  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:57:56.359466  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.359801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360238  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360272  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.360643  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360654  566681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:56.360667  566681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:57:56.360676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360684  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.360847  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.360902  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360949  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.361063  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361261  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.361264  566681 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:57:56.361278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.361264  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:57:56.361448  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.361531  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361713  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.362047  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.363668  566681 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:57:56.363670  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:57:56.364768  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365172  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:56.365189  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:57:56.365208  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.365463  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.365492  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365867  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.366200  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.366332  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:57:56.366394  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.366567  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.367647  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:57:56.367669  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:57:56.367689  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.369424  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370073  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.370181  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370353  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.370354  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1002 06:57:56.370539  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.370710  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.370855  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.371120  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.371862  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.371993  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.372440  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.372590  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.372646  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.373687  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.373711  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.373884  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.374060  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.374270  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.374438  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.374887  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.376513  566681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:57:56.377878  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:56.377895  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:57:56.377926  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.381301  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.381862  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.381898  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.382058  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.382245  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.382379  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.382525  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	W1002 06:57:56.611250  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611293  566681 retry.go:31] will retry after 268.923212ms: ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	W1002 06:57:56.611372  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611378  566681 retry.go:31] will retry after 284.79555ms: ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:57.238066  566681 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:57:57.238093  566681 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:57:57.274258  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:57.291447  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:57.296644  566681 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:57.296665  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:57:57.317724  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:57.326760  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:57:57.326790  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:57:57.344388  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:57.359635  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:57:57.359666  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:57:57.391219  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:57.397913  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:57.466213  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:57.539770  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:57:57.539800  566681 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:57:57.565073  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:57:57.565109  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:57:57.626622  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.42956155s)
	I1002 06:57:57.626664  566681 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.422968545s)
	I1002 06:57:57.626751  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:57.626829  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:57:57.788309  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:57:57.788340  566681 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:57:57.863163  566681 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:57.863190  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:57:57.896903  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:57:57.896955  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:57:57.923302  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:58.011690  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:58.012981  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:58.110306  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:57:58.110346  566681 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:57:58.142428  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:57:58.142456  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:57:58.216082  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:57:58.216112  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:57:58.218768  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:58.222643  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.222669  566681 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:57:58.429860  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:57:58.429897  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:57:58.485954  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:57:58.485995  566681 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:57:58.501916  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:57:58.501955  566681 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:57:58.521314  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.818318  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:57:58.818357  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:57:58.833980  566681 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:58.834010  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:57:58.873392  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:58.873431  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:57:59.176797  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:59.186761  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:57:59.186798  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:57:59.305759  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:59.719259  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:57:59.719285  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:58:00.188246  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:58:00.188281  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:58:00.481133  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.20682266s)
	I1002 06:58:00.481238  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.481605  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.481667  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.481693  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.481705  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481717  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.482053  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.482070  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.482081  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.644178  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:58:00.644209  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:58:01.086809  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:58:01.086834  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:58:01.452986  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:01.453026  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:58:02.150700  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:02.601667  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.310178549s)
	I1002 06:58:02.601725  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.28395893s)
	I1002 06:58:02.601734  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601747  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601765  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601795  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601869  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.25743101s)
	I1002 06:58:02.601905  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601924  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601917  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.210665802s)
	I1002 06:58:02.601951  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601961  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602046  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602055  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602062  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602178  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602365  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602381  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602379  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602385  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602399  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602401  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602410  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602351  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602416  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602424  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602390  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602460  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602330  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602552  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602560  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602566  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602767  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602847  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.603001  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603018  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602869  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602869  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603276  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:03.763895  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:58:03.763944  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:03.767733  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768302  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:03.768333  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768654  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:03.768868  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:03.769064  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:03.769213  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:04.277228  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:58:04.505226  566681 addons.go:238] Setting addon gcp-auth=true in "addons-535714"
	I1002 06:58:04.505305  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:58:04.505781  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.505848  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.521300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1002 06:58:04.521841  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.522464  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.522494  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.522889  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.523576  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.523636  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.537716  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1002 06:58:04.538258  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.538728  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.538756  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.539153  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.539385  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:58:04.541614  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:58:04.541849  566681 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:58:04.541880  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:04.545872  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546401  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:04.546429  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546708  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:04.546895  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:04.547027  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:04.547194  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:05.770941  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.372950609s)
	I1002 06:58:05.771023  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771039  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771065  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.304816797s)
	I1002 06:58:05.771113  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771131  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771178  566681 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.1443973s)
	I1002 06:58:05.771222  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.144363906s)
	I1002 06:58:05.771258  566681 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 06:58:05.771308  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.847977896s)
	W1002 06:58:05.771333  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771355  566681 retry.go:31] will retry after 297.892327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771456  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758443398s)
	I1002 06:58:05.771481  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771490  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771540  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.759815099s)
	I1002 06:58:05.771573  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771575  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.552784974s)
	I1002 06:58:05.771584  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771595  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771611  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771719  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.250362363s)
	I1002 06:58:05.771747  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771759  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771942  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.771963  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772013  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772022  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772032  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772040  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772044  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772052  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772059  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772194  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772224  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772230  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772248  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772485  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772523  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772532  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772541  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772549  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772589  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772628  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772636  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772645  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772653  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772709  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772796  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773193  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.773210  566681 addons.go:479] Verifying addon registry=true in "addons-535714"
	I1002 06:58:05.773744  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.773810  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773834  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774038  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.774118  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774129  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772818  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772841  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774925  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774937  566681 addons.go:479] Verifying addon ingress=true in "addons-535714"
	I1002 06:58:05.772862  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775004  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775017  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.775024  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772880  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775347  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775380  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775386  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775394  566681 addons.go:479] Verifying addon metrics-server=true in "addons-535714"
	I1002 06:58:05.776348  566681 node_ready.go:35] waiting up to 6m0s for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.776980  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.776996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.776998  566681 out.go:179] * Verifying registry addon...
	I1002 06:58:05.779968  566681 out.go:179] * Verifying ingress addon...
	I1002 06:58:05.780767  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:58:05.782010  566681 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:58:05.829095  566681 node_ready.go:49] node "addons-535714" is "Ready"
	I1002 06:58:05.829146  566681 node_ready.go:38] duration metric: took 52.75602ms for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.829168  566681 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:58:05.829233  566681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:58:05.834443  566681 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:58:05.834466  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:05.835080  566681 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:58:05.835100  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:05.875341  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.875368  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.875751  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.875763  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.875778  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:58:05.875878  566681 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 06:58:05.909868  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.909898  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.910207  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.910270  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.910287  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.069811  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:06.216033  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039174172s)
	W1002 06:58:06.216104  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216108  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.910297192s)
	I1002 06:58:06.216150  566681 retry.go:31] will retry after 161.340324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216192  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216210  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216504  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216542  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216549  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.216557  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216563  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216800  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216843  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.218514  566681 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535714 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:58:06.294875  566681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535714" context rescaled to 1 replicas
	I1002 06:58:06.324438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:06.327459  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.377937  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:58:06.794270  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.798170  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.296006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.297921  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.825812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.825866  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.904551  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.753782282s)
	I1002 06:58:07.904616  566681 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362740219s)
	I1002 06:58:07.904661  566681 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.075410022s)
	I1002 06:58:07.904685  566681 api_server.go:72] duration metric: took 11.707614799s to wait for apiserver process to appear ...
	I1002 06:58:07.904692  566681 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:58:07.904618  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.904714  566681 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1002 06:58:07.904746  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905650  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.905668  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.905673  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:07.905682  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.905697  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905988  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.906010  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.906023  566681 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:58:07.917720  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:58:07.917721  566681 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:58:07.919394  566681 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:58:07.920319  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:58:07.920611  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:58:07.920631  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:58:07.923712  566681 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1002 06:58:07.935689  566681 api_server.go:141] control plane version: v1.34.1
	I1002 06:58:07.935726  566681 api_server.go:131] duration metric: took 31.026039ms to wait for apiserver health ...
	I1002 06:58:07.935739  566681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:58:07.938642  566681 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:58:07.938662  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:07.962863  566681 system_pods.go:59] 20 kube-system pods found
	I1002 06:58:07.962924  566681 system_pods.go:61] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:07.962934  566681 system_pods.go:61] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962944  566681 system_pods.go:61] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962951  566681 system_pods.go:61] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:07.962955  566681 system_pods.go:61] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:07.962959  566681 system_pods.go:61] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending
	I1002 06:58:07.962962  566681 system_pods.go:61] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:07.962965  566681 system_pods.go:61] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:07.962968  566681 system_pods.go:61] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:07.962973  566681 system_pods.go:61] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:07.962979  566681 system_pods.go:61] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:07.962983  566681 system_pods.go:61] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:07.962988  566681 system_pods.go:61] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:07.962994  566681 system_pods.go:61] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:07.962999  566681 system_pods.go:61] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:07.963005  566681 system_pods.go:61] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:07.963017  566681 system_pods.go:61] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:07.963022  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963031  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963036  566681 system_pods.go:61] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:07.963048  566681 system_pods.go:74] duration metric: took 27.298515ms to wait for pod list to return data ...
	I1002 06:58:07.963061  566681 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:58:07.979696  566681 default_sa.go:45] found service account: "default"
	I1002 06:58:07.979723  566681 default_sa.go:55] duration metric: took 16.655591ms for default service account to be created ...
	I1002 06:58:07.979733  566681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:58:08.050371  566681 system_pods.go:86] 20 kube-system pods found
	I1002 06:58:08.050407  566681 system_pods.go:89] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:08.050415  566681 system_pods.go:89] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050424  566681 system_pods.go:89] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050430  566681 system_pods.go:89] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:08.050438  566681 system_pods.go:89] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:08.050443  566681 system_pods.go:89] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:58:08.050449  566681 system_pods.go:89] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:08.050456  566681 system_pods.go:89] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:08.050463  566681 system_pods.go:89] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:08.050472  566681 system_pods.go:89] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:08.050477  566681 system_pods.go:89] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:08.050485  566681 system_pods.go:89] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:08.050493  566681 system_pods.go:89] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:08.050504  566681 system_pods.go:89] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:08.050512  566681 system_pods.go:89] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:08.050523  566681 system_pods.go:89] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:08.050528  566681 system_pods.go:89] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:08.050537  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050542  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050551  566681 system_pods.go:89] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:08.050567  566681 system_pods.go:126] duration metric: took 70.827007ms to wait for k8s-apps to be running ...
	I1002 06:58:08.050583  566681 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:58:08.050638  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:08.169874  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:58:08.169907  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:58:08.289577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.292025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.296361  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.296391  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:58:08.432642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:08.459596  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.795545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.796983  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.947651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.295174  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.296291  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.426575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.794891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.794937  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.929559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.288382  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.293181  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.428326  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.511821  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.441960114s)
	W1002 06:58:10.511871  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511903  566681 retry.go:31] will retry after 394.105371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511999  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.133998235s)
	I1002 06:58:10.512065  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512084  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512009  566681 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.461351775s)
	I1002 06:58:10.512151  566681 system_svc.go:56] duration metric: took 2.461548607s WaitForService to wait for kubelet
	I1002 06:58:10.512170  566681 kubeadm.go:586] duration metric: took 14.315097833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:58:10.512195  566681 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:58:10.512421  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512436  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.512445  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512451  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512808  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512831  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.525421  566681 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 06:58:10.525467  566681 node_conditions.go:123] node cpu capacity is 2
	I1002 06:58:10.525483  566681 node_conditions.go:105] duration metric: took 13.282233ms to run NodePressure ...
	I1002 06:58:10.525500  566681 start.go:241] waiting for startup goroutines ...
	I1002 06:58:10.876948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.878962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.907099  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:10.933831  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.474178987s)
	I1002 06:58:10.933902  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.933917  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934327  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934351  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.934363  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.934372  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934718  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934741  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.936073  566681 addons.go:479] Verifying addon gcp-auth=true in "addons-535714"
	I1002 06:58:10.939294  566681 out.go:179] * Verifying gcp-auth addon...
	I1002 06:58:10.941498  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:58:10.967193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.967643  566681 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:58:10.967661  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.291995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.292859  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.426822  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.449596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.787220  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.790007  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.927177  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.946352  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.291330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.291893  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.412988  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.505843996s)
	W1002 06:58:12.413060  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.413088  566681 retry.go:31] will retry after 830.72209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.425033  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.449434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.790923  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.792837  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.929132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.949344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.244514  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:13.289311  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.291334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.429008  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.453075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.786448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.787372  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.926128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.944808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.290787  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.291973  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.426597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.446124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.495404  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250841467s)
	W1002 06:58:14.495476  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.495515  566681 retry.go:31] will retry after 993.52867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.787133  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.787363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.925480  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.947120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.288745  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.290247  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.426491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.446707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.489998  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:15.790203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.790718  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.926338  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.947762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.288050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.294216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.426315  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.448623  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.749674  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.259622296s)
	W1002 06:58:16.749739  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.749766  566681 retry.go:31] will retry after 685.893269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.784937  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.789418  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.924303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.945254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.286582  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.289258  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.429493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.436551  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:17.446130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.789304  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.789354  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.927192  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.947272  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.287684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.287964  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.425334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.446542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.793984  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357370737s)
	W1002 06:58:18.794035  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.794058  566681 retry.go:31] will retry after 1.769505645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.818834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.819319  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.926250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.946166  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.286120  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.287299  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.427368  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.446296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.788860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.790575  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.926266  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.946838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.285631  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.286287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.426458  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.448700  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.563743  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:20.784983  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.792452  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.928439  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.946213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.354534  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.355101  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.424438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.447780  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.787792  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.788239  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.926313  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.946909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.986148  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422343909s)
	W1002 06:58:21.986215  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:21.986241  566681 retry.go:31] will retry after 1.591159568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:22.479105  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.490010  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.490062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:22.490154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.785438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.785505  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.924097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.945260  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.287691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.288324  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.424675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.444770  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.578011  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:23.942123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.948294  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.948453  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.950791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.287641  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.287755  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.427062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.445753  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.646106  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.068053257s)
	W1002 06:58:24.646165  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.646192  566681 retry.go:31] will retry after 2.605552754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.785021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.786706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.924880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.945307  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.293097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.295253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.426401  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.448785  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.786965  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.789832  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.926383  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.947419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.286815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.424942  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.444763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.788540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.788706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.945896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.252378  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:27.285347  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.286330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.426765  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.444675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.783930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.785939  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.925152  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.946794  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:27.992201  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:27.992240  566681 retry.go:31] will retry after 8.383284602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:28.292474  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.293236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.449878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:28.785825  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.786277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.930557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.944934  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.288741  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.289425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.425596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.448825  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.791293  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.791772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.925493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.947040  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.289093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.289274  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.429043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.445086  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.787343  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.788106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.925916  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.945578  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.287772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.288130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.424173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.444911  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.839251  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.839613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.924537  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.945244  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.285593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.287197  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.428173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.445646  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.790722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.792545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.924044  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.948465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.287477  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.287815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.426173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.445002  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.789091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.789248  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.926672  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.945340  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.287879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.291550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.424476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.446160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.790769  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.793072  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.924896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.945667  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.523723  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.524500  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.524737  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.525162  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.790230  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.791831  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.924241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.944951  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.289627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.289977  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.375684  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:36.425592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.451074  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.785903  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.787679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.925288  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.947999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.311635  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.311959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.426029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.446091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.636801  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261070571s)
	W1002 06:58:37.636852  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.636877  566681 retry.go:31] will retry after 12.088306464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.784365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.786077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.924729  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.287422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.288052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.424776  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.446043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.787364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.788336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.929977  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.952669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.285777  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.286130  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.425664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.445359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.791043  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.792332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.927261  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.949133  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.297847  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.298155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.508411  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.508530  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.790869  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.791640  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.926541  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.946409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.284335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.288282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.425342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.445476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.786456  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.787369  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.925788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.285122  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.289954  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.427812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.448669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.789086  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.793784  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.981476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.983793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.287301  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.287653  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.425089  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.446115  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.788762  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.788804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.925841  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.946154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.291446  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.291561  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.424642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.445497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.784807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.785666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.924223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.945793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.287330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.288804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.425720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.445387  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.784761  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.787219  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.925198  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.945101  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.287324  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.287453  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.425817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.785000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.786016  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.924786  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.944720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.284615  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.286350  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.424772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.444696  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.784801  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.786247  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.924675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.945863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.285254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.286071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.424850  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.444546  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.784736  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.787062  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.924609  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.285611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.286827  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.424821  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.444716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.726164  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:49.787775  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.787812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.924332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.945915  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:50.285693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.287323  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.425093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:50.457717  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.457755  566681 retry.go:31] will retry after 14.401076568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.785374  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.786592  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.924494  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.946113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.285309  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.424519  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.446358  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.785764  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.787620  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.924671  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.945518  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.284608  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.286328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.426252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.444955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.785415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.786501  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.924360  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.945603  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.286059  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.286081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.426061  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.445434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.784563  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.787018  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.926712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.285670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.286270  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.425263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.445015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.783971  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.785518  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.924652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.944701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.284095  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.285982  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.425045  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.445159  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.784789  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.785811  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.925024  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.945670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.284935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.286230  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.424865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.444979  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.784010  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.785095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.925082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.945267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.285037  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.290841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.423992  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.444492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.785708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.786647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.944543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.284397  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.286589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.424263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.446278  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.784592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.786223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.925275  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.945639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.284167  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.286213  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.424554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.446331  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.786351  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.786532  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.924799  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.944552  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.284593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.286147  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.427708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.446640  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.783993  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.786195  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.925109  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.945645  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.284268  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.286567  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.425880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.444926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.784751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.786669  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.924082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.945409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.285484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.287955  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.424588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.445328  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.785933  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.786611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.924311  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.945554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.284664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.286758  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.424558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.445443  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.785718  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.786015  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.924950  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.945320  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.285692  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.287456  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.423909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.445028  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.784417  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.785847  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.859977  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:04.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.944867  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:05.286410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.424815  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.444725  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:59:05.565727  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.565775  566681 retry.go:31] will retry after 12.962063584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.784083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.785399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.924301  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.945548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.284341  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.285025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.424577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.445930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.785592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.785777  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.924651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.944548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.284807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.286980  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.424593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.445604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.785681  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.786565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.924412  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.945298  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.284890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.285768  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.424422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.446875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.784632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.786747  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.924452  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.284701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.286699  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.424832  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.445005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.785080  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.923720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.944468  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.285848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.285877  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.425574  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.445229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.785800  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.788069  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.924958  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.945132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.284817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.286986  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.424693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.444335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.786755  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.788412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.924402  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.944935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.285499  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.285734  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.424709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.445959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.785549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.788041  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.924691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.944292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.285683  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.424754  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.445585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.784745  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.786053  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.925403  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.945860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.285184  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.286959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.424804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.446097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.791558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.791556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.927542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.949956  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.284639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.286617  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.426580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.446175  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.784496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.787071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.925830  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.945618  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.286160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:16.287392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.424973  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.446497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.789545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.790116  566681 kapi.go:107] duration metric: took 1m11.009348953s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:59:16.925187  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.947267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.287647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.426165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:17.450844  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.786988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.928406  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.027597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.293020  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.429378  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.449227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.528488  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:18.796448  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.929553  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.946292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.288404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.429199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.452666  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.792639  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.864991  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.336449949s)
	W1002 06:59:19.865069  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:19.865160  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865179  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865566  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:59:19.865575  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:59:19.865834  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:59:19.865969  566681 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:59:19.924481  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.945058  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.286730  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.424767  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.445496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.787056  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.925303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.945594  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.285610  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.424114  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.445438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.786589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.924253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.944783  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.285375  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.424724  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.445811  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.828328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.929492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.945629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.286455  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.424116  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.444871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.785953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.924350  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.945321  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.286907  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.424613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.445706  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.786265  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.925165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.944432  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.286899  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.424337  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.445373  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.786646  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.924121  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.944695  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.286707  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.425250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.445323  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.786287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.945489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.286403  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.424957  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.445376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.786820  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.924170  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.945197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.424241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.445542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.786419  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.945740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.286366  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.425536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.445687  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.788123  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.944760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.285395  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.425015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.445071  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.786362  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.925693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.945540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.286268  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.424296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.446123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.786155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.926684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.945375  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.286413  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.424180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.444838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.786253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.925151  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.944944  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.288748  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.425620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.445650  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.786358  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.924738  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.944757  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.285092  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.424998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.445067  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.786516  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.924306  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.945543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.286428  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.423533  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.445039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.785517  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.944555  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.286468  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.424778  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.444808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.785451  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.924018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.287660  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.424005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.445419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.785743  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.924870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.944575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.286370  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.424689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.444639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.786644  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.928760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.945529  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.286055  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.425011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.787058  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.924829  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.944865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.285681  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.424212  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.445570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.786536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.924039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.945611  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.286872  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.425081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.445160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.785854  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.924803  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.945395  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.286806  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.424531  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.445213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.785794  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.924199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.946416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.287223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.425005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.445179  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.786152  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.945545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.286313  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.425004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.445925  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.786682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.944902  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.286167  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.424932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.444879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.785378  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.925864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.945123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.286422  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.424954  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.445018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.786489  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.924425  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.945064  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.286244  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.425181  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.445110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.785417  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.923870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.944712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.287782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.424751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.444542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.786556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.924410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.945514  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.286856  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.424634  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.444823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.786341  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.925249  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.945585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.287532  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.427364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.449565  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.926679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.947416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.289682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.428232  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.445465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.787537  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.926415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.945253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.285757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.424433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.448251  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.785971  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.928422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.946461  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.286536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.452271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.786128  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.926032  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.946426  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.287601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.424345  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.445705  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.787096  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.924759  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.946688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.290180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.519704  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.519891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.926689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.946557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.286054  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.425914  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.447300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.785957  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.924030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.949871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.291565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.428120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.526092  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.786283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.933203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.952823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.290757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:58.425788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:58.445898  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.785286  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.135410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.135484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.289658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.424763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.444901  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.789990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.927768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.950570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.288666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.424489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.444995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.785712  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.928193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.945797  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.289874  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.429342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.447102  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.787399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.924633  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.944955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.288296  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.432709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.448119  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.788304  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.936551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.950283  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.291180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.429826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.446896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.789649  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.930297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.285728  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.423878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.445021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.785989  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.926604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.946365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.289629  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.424560  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.446580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.786184  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.925038  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.286414  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.425072  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.445415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.786235  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.924932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.945108  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.286318  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.425639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.445791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.787192  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.945680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.286388  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.424699  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.445180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.786177  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.927180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.945006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.285412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.424690  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.444685  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.787988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.926782  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.944680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.286385  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.425422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.445890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.785391  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.925292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.946110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.286953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.424926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.445097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.785990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.925536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.945882  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.286095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.426218  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.445400  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.787180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.924959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.945605  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.424843  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.445297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.786034  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.927087  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.945676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.286216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.424888  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.444768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.785283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.925300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.945536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.287658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.424359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.445282  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.785834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.924384  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.945604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.286392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.424670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.445327  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.786482  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.924913  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.944676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.286962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.428554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.445872  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.787125  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.924730  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.945508  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.286528  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.426864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.786434  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.926688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.945265  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.286255  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.425491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.446113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.925826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.946549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.286336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.424707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.444772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.785404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.925678  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.945252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.425487  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.447029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.786550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.945389  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.288156  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.425586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.446602  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.787696  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.924004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.286521  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.424493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.446224  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.786604  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.925118  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.945482  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.424632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.445848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.785791  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.927001  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.944907  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.288049  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.424875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.444559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.786767  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.925226  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.945050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.285958  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.426083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.444740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.787052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.945062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.285717  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.424050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.444966  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.787841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.924740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.945492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.286484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.424236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.445504  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.786601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.924551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.945948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.288423  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.424871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.445286  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.786695  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.926223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.945407  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.286021  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.425588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.445469  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.786883  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.926085  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.945814  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.287360  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.424981  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.445361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.787680  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.924556  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.945363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.288077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.425366  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.447433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.847272  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.946629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.946982  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.285658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.424106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.445538  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.787044  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.927886  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.944580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.290469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.425444  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.448620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.789282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.930009  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.948721  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.287469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.432852  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.446652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.788507  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.930180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.954772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.293484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.435262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.449271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.788843  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.928945  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.288443  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.427657  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.447716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.787995  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.933694  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.946106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.287636  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.427229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.446000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.788221  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.925863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.944669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:39.286808  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:39.425719  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:39.446011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.005533  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.011858  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.013227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.289216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.429330  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.446597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.788887  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.934361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.949590  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.288436  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.426586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.446712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.790082  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.926762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.948030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.286904  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.428171  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.447262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.787879  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.928999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.947900  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.289340  566681 kapi.go:107] duration metric: took 2m37.507327929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 07:00:43.426593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.445627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.927030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.946124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.426277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.445511  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.928128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.945892  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.424940  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.445245  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.925479  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.948084  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:46.427998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:46.446348  566681 kapi.go:107] duration metric: took 2m35.504841728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 07:00:46.448361  566681 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-535714 cluster.
	I1002 07:00:46.449772  566681 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 07:00:46.451121  566681 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 07:00:46.925947  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.429007  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.927793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.430587  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.930344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.428197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.928448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:50.425299  566681 kapi.go:107] duration metric: took 2m42.504972928s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 07:00:50.428467  566681 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 07:00:50.429978  566681 addons.go:514] duration metric: took 2m54.232824958s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds metrics-server storage-provisioner storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 07:00:50.430050  566681 start.go:246] waiting for cluster config update ...
	I1002 07:00:50.430076  566681 start.go:255] writing updated cluster config ...
	I1002 07:00:50.430525  566681 ssh_runner.go:195] Run: rm -f paused
	I1002 07:00:50.439887  566681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:50.446240  566681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.451545  566681 pod_ready.go:94] pod "coredns-66bc5c9577-w7hjm" is "Ready"
	I1002 07:00:50.451589  566681 pod_ready.go:86] duration metric: took 5.295665ms for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.454257  566681 pod_ready.go:83] waiting for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.459251  566681 pod_ready.go:94] pod "etcd-addons-535714" is "Ready"
	I1002 07:00:50.459291  566681 pod_ready.go:86] duration metric: took 4.998226ms for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.463385  566681 pod_ready.go:83] waiting for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.473863  566681 pod_ready.go:94] pod "kube-apiserver-addons-535714" is "Ready"
	I1002 07:00:50.473899  566681 pod_ready.go:86] duration metric: took 10.481477ms for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.478391  566681 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.845519  566681 pod_ready.go:94] pod "kube-controller-manager-addons-535714" is "Ready"
	I1002 07:00:50.845556  566681 pod_ready.go:86] duration metric: took 367.127625ms for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.046035  566681 pod_ready.go:83] waiting for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.445054  566681 pod_ready.go:94] pod "kube-proxy-z495t" is "Ready"
	I1002 07:00:51.445095  566681 pod_ready.go:86] duration metric: took 399.024039ms for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.644949  566681 pod_ready.go:83] waiting for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045721  566681 pod_ready.go:94] pod "kube-scheduler-addons-535714" is "Ready"
	I1002 07:00:52.045756  566681 pod_ready.go:86] duration metric: took 400.769133ms for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045769  566681 pod_ready.go:40] duration metric: took 1.605821704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:52.107681  566681 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 07:00:52.109482  566681 out.go:179] * Done! kubectl is now configured to use "addons-535714" cluster and "default" namespace by default
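	
	The gcp-auth messages in the run above mention opting a pod out of credential mounting via a `gcp-auth-skip-secret` label. A minimal manifest illustrating that label is sketched below; the label key comes from the addon's own output, while the pod name, container name, and command are placeholders for illustration only (the busybox image is the one exercised elsewhere in this report):
	
	```yaml
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name, for illustration only
	  labels:
	    # Label key taken from the gcp-auth addon message above;
	    # its presence tells the addon's webhook not to mount GCP credentials into this pod.
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app                       # placeholder container name
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]      # placeholder command to keep the pod running
	```
	
	Per the same output, pods created before the addon was enabled pick up credentials only after being recreated or after rerunning `addons enable` with `--refresh`.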
	
	
	==> CRI-O <==
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.080258056Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=812ab3cf-58d6-43d0-8be2-31db6707fbfd name=/runtime.v1.RuntimeService/ExecSync
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.081025498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d24fe1c-bbe0-4152-bdea-81d30d85b1c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.081151994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d24fe1c-bbe0-4152-bdea-81d30d85b1c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.081792955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b16f4fdfc55bf9744d3e067e49727ba71fff16d6ad19b7413ae82a67395df1a,PodSandboxId:448270914f28f204592d58d83f57d17d600754b9c02ac401ec9a2042c22d13e0,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1759388542280268044,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d9e839-b6d4-4982-9fb1-a58db70a15c8,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107243563c5065097129259be0a3eddd75c72733382dd062a348c02bf08ab9ee,PodSandboxId:eb3be12cdfb85aed3c9deb071d9f8dc21513fbbe46bdd07c70a7438da0525e36,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1759388539996810234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-h7d97,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 77fe7a55-11e4-4227-a109-40f35a78ecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903
,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-
jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fd
ad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:
&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703de
a619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSand
boxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73
147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d62b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.
hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[
string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: l
ocal-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f13bb65dc0af2ddf265b66b7cd00d3c4a6a885f62798ec8ba26c828e3b65e5,PodSandboxId:33d2cbb30e57bbc56c8b23ce7a0fa62a7e829ecccdbde92405974e5504d808c1,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3c52eedeec804bef2771a5ea8871d31f61d61129050469324ccb8a51890cbe16,State:CONTAINER_RUNNING,CreatedAt:1759388355875034729,Labels:map[string]string{io.kubernetes.container.name: registr
y,io.kubernetes.pod.name: registry-66898fdd98-rc8tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 664b0bff-06c4-43b6-8e54-2664c0dcad56,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff59d20,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c851b0e54da5cadf927f09f74c110760c6ff9e0269420cad41aaa4884a6e74,PodSandboxId:560f675bdc8de7f3408f2e4aeac5f6698708f5f436fd8da8469e76362714eba3,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAI
NER_RUNNING,CreatedAt:1759388324146342643,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-d9npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542f8fb1-6b0c-47b2-89ff-4dc935710130,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9
640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"nam
e\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c39
94bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d24fe1c-bbe0-4152-bdea-81d30d85b1c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.126784906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c786aab-d570-4755-98e2-8c0996e4801f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.126860232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c786aab-d570-4755-98e2-8c0996e4801f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.128566341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0b068f5-95dd-40ba-ace7-fd6b7e52a354 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.129975072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388544129943767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0b068f5-95dd-40ba-ace7-fd6b7e52a354 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.130685536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d096d83b-d366-4f3d-9eb1-70df69ddf220 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.130766110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d096d83b-d366-4f3d-9eb1-70df69ddf220 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.131535985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b16f4fdfc55bf9744d3e067e49727ba71fff16d6ad19b7413ae82a67395df1a,PodSandboxId:448270914f28f204592d58d83f57d17d600754b9c02ac401ec9a2042c22d13e0,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1759388542280268044,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d9e839-b6d4-4982-9fb1-a58db70a15c8,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107243563c5065097129259be0a3eddd75c72733382dd062a348c02bf08ab9ee,PodSandboxId:eb3be12cdfb85aed3c9deb071d9f8dc21513fbbe46bdd07c70a7438da0525e36,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1759388539996810234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-h7d97,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 77fe7a55-11e4-4227-a109-40f35a78ecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903
,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-
jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fd
ad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:
&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703de
a619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSand
boxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73
147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d62b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.
hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[
string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: l
ocal-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f13bb65dc0af2ddf265b66b7cd00d3c4a6a885f62798ec8ba26c828e3b65e5,PodSandboxId:33d2cbb30e57bbc56c8b23ce7a0fa62a7e829ecccdbde92405974e5504d808c1,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3c52eedeec804bef2771a5ea8871d31f61d61129050469324ccb8a51890cbe16,State:CONTAINER_RUNNING,CreatedAt:1759388355875034729,Labels:map[string]string{io.kubernetes.container.name: registr
y,io.kubernetes.pod.name: registry-66898fdd98-rc8tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 664b0bff-06c4-43b6-8e54-2664c0dcad56,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff59d20,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c851b0e54da5cadf927f09f74c110760c6ff9e0269420cad41aaa4884a6e74,PodSandboxId:560f675bdc8de7f3408f2e4aeac5f6698708f5f436fd8da8469e76362714eba3,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAI
NER_RUNNING,CreatedAt:1759388324146342643,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-d9npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542f8fb1-6b0c-47b2-89ff-4dc935710130,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9
640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"nam
e\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c39
94bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d096d83b-d366-4f3d-9eb1-70df69ddf220 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.169963450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43720b7b-80f6-4176-ba23-0fa46e52426b name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.170263294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43720b7b-80f6-4176-ba23-0fa46e52426b name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.171837966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=521618d9-79fc-4983-8fa8-c1240e5c6fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.174613097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388544174584917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=521618d9-79fc-4983-8fa8-c1240e5c6fe3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.175572215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d08142a5-027e-4145-9211-c636ef3364d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.175630449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d08142a5-027e-4145-9211-c636ef3364d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.176369576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b16f4fdfc55bf9744d3e067e49727ba71fff16d6ad19b7413ae82a67395df1a,PodSandboxId:448270914f28f204592d58d83f57d17d600754b9c02ac401ec9a2042c22d13e0,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1759388542280268044,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d9e839-b6d4-4982-9fb1-a58db70a15c8,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107243563c5065097129259be0a3eddd75c72733382dd062a348c02bf08ab9ee,PodSandboxId:eb3be12cdfb85aed3c9deb071d9f8dc21513fbbe46bdd07c70a7438da0525e36,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1759388539996810234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-h7d97,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 77fe7a55-11e4-4227-a109-40f35a78ecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903
,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-
jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fd
ad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:
&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703de
a619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSand
boxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73
147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d62b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.
hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[
string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: l
ocal-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f13bb65dc0af2ddf265b66b7cd00d3c4a6a885f62798ec8ba26c828e3b65e5,PodSandboxId:33d2cbb30e57bbc56c8b23ce7a0fa62a7e829ecccdbde92405974e5504d808c1,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3c52eedeec804bef2771a5ea8871d31f61d61129050469324ccb8a51890cbe16,State:CONTAINER_RUNNING,CreatedAt:1759388355875034729,Labels:map[string]string{io.kubernetes.container.name: registr
y,io.kubernetes.pod.name: registry-66898fdd98-rc8tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 664b0bff-06c4-43b6-8e54-2664c0dcad56,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff59d20,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c851b0e54da5cadf927f09f74c110760c6ff9e0269420cad41aaa4884a6e74,PodSandboxId:560f675bdc8de7f3408f2e4aeac5f6698708f5f436fd8da8469e76362714eba3,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAI
NER_RUNNING,CreatedAt:1759388324146342643,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-d9npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542f8fb1-6b0c-47b2-89ff-4dc935710130,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9
640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"nam
e\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c39
94bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d08142a5-027e-4145-9211-c636ef3364d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.219678289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c476d4df-c1fe-445c-bc78-0d1a1bea64e9 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.219762286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c476d4df-c1fe-445c-bc78-0d1a1bea64e9 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.221849538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9b14aa9-d856-44a6-a230-39b27d223537 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.223380090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388544223349184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9b14aa9-d856-44a6-a230-39b27d223537 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.224385931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=145bc84d-8e41-45a5-8b30-d2e49ed54643 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.224477147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=145bc84d-8e41-45a5-8b30-d2e49ed54643 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:02:24 addons-535714 crio[827]: time="2025-10-02 07:02:24.225239366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b16f4fdfc55bf9744d3e067e49727ba71fff16d6ad19b7413ae82a67395df1a,PodSandboxId:448270914f28f204592d58d83f57d17d600754b9c02ac401ec9a2042c22d13e0,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1759388542280268044,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d9e839-b6d4-4982-9fb1-a58db70a15c8,},Annotations:map[string]string{io.kubernetes.container.hash: b38ca3e1,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:107243563c5065097129259be0a3eddd75c72733382dd062a348c02bf08ab9ee,PodSandboxId:eb3be12cdfb85aed3c9deb071d9f8dc21513fbbe46bdd07c70a7438da0525e36,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3bd49f1f42c46a0def0e208037f718f7a122902ffa846fd7c2757e017c1ee29e,State:CONTAINER_RUNNING,CreatedAt:1759388539996810234,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-85f8f8dc54-h7d97,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 77fe7a55-11e4-4227-a109-40f35a78ecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 22a1aabb,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903
,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-
jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fd
ad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:
&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703de
a619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSand
boxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73
147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d62b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.
hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[
string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: l
ocal-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8f13bb65dc0af2ddf265b66b7cd00d3c4a6a885f62798ec8ba26c828e3b65e5,PodSandboxId:33d2cbb30e57bbc56c8b23ce7a0fa62a7e829ecccdbde92405974e5504d808c1,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3c52eedeec804bef2771a5ea8871d31f61d61129050469324ccb8a51890cbe16,State:CONTAINER_RUNNING,CreatedAt:1759388355875034729,Labels:map[string]string{io.kubernetes.container.name: registr
y,io.kubernetes.pod.name: registry-66898fdd98-rc8tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 664b0bff-06c4-43b6-8e54-2664c0dcad56,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff59d20,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c851b0e54da5cadf927f09f74c110760c6ff9e0269420cad41aaa4884a6e74,PodSandboxId:560f675bdc8de7f3408f2e4aeac5f6698708f5f436fd8da8469e76362714eba3,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAI
NER_RUNNING,CreatedAt:1759388324146342643,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-d9npj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 542f8fb1-6b0c-47b2-89ff-4dc935710130,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image
:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9
640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"nam
e\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:m
ap[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c39
94bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=145bc84d-8e41-45a5-8b30-d2e49ed54643 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	6b16f4fdfc55b       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          2 seconds ago        Exited              registry-test                            0                   448270914f28f       registry-test
	107243563c506       ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39                                        4 seconds ago        Running             headlamp                                 0                   eb3be12cdfb85       headlamp-85f8f8dc54-h7d97
	86667c9385b67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   2fe38d26ed81e       busybox
	6e1593fcd2d1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f4c3a8c0ea5c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	65d9fdba36a17       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   e2277305f110b       csi-hostpathplugin-8sjk8
	81f190fa89d8e       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             About a minute ago   Running             controller                               0                   2c0a4b75d16bb       ingress-nginx-controller-9cc49f96f-jcwrw
	0683a8b55d03d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3456f5ab4e9db       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f6808e1f9304       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago        Running             csi-resizer                              0                   dabf0b0e1eb70       csi-hostpath-resizer-0
	24139e6a7a8b1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago        Running             csi-attacher                             0                   e2ed9baa384a5       csi-hostpath-attacher-0
	46de36d65127e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago        Running             csi-external-health-monitor-controller   0                   e2277305f110b       csi-hostpathplugin-8sjk8
	98d5407fe4705       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   e2ad15837b991       snapshot-controller-7d9fbc56b8-g4hd4
	ea44a6e53635f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago        Running             volume-snapshot-controller               0                   bbec6993c46f7       snapshot-controller-7d9fbc56b8-knwl8
	2f84e33ebf14f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   2 minutes ago        Exited              patch                                    0                   45c7f94d02bfb       ingress-nginx-admission-patch-46z2n
	5ce0b3e6c8fef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   2 minutes ago        Exited              create                                   0                   13a0722f22fb7       ingress-nginx-admission-create-jsw7z
	d20e001ce5fa7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            2 minutes ago        Running             gadget                                   0                   53cbb87b563ff       gadget-2hn79
	b1d2fad243c3b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   99eafaf0bf06b       local-path-provisioner-648f6765c9-gf62q
	e8f13bb65dc0a       docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                           3 minutes ago        Running             registry                                 0                   33d2cbb30e57b       registry-66898fdd98-rc8tq
	82c851b0e54da       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   560f675bdc8de       registry-proxy-d9npj
	c68a602009da4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   1239599eb3508       kube-ingress-dns-minikube
	75992e0dff6a5       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               3 minutes ago        Running             cloud-spanner-emulator                   0                   0863b64ffcb34       cloud-spanner-emulator-85f6b7fc65-hh72s
	0f29426982799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   348af25e84579       storage-provisioner
	58aa192645e96       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago        Running             amd-gpu-device-plugin                    0                   dba3c49629455       amd-gpu-device-plugin-f7qcs
	6e31cb36c4500       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   4fcabfc373e60       coredns-66bc5c9577-w7hjm
	fb130499febb3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   646600c8d86f7       kube-proxy-z495t
	466837c8cdfcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   c7d4e0eb984a2       etcd-addons-535714
	da8295539fc0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   36d2846a22a84       kube-scheduler-addons-535714
	da58df3cad660       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   63f4cb9d3437a       kube-controller-manager-addons-535714
	deaf436584a26       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   35f49d5f3b8fb       kube-apiserver-addons-535714
	
	
	==> coredns [6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb] <==
	[INFO] 10.244.0.7:35110 - 11487 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000105891s
	[INFO] 10.244.0.7:35110 - 31639 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100284s
	[INFO] 10.244.0.7:35110 - 25746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080168s
	[INFO] 10.244.0.7:35110 - 43819 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100728s
	[INFO] 10.244.0.7:35110 - 63816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124028s
	[INFO] 10.244.0.7:35110 - 35022 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129164s
	[INFO] 10.244.0.7:35110 - 28119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001725128s
	[INFO] 10.244.0.7:50584 - 36630 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148556s
	[INFO] 10.244.0.7:50584 - 36962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067971s
	[INFO] 10.244.0.7:37190 - 758 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052949s
	[INFO] 10.244.0.7:37190 - 1043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051809s
	[INFO] 10.244.0.7:37461 - 4143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057036s
	[INFO] 10.244.0.7:37461 - 4397 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049832s
	[INFO] 10.244.0.7:36180 - 39849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111086s
	[INFO] 10.244.0.7:36180 - 40050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069757s
	[INFO] 10.244.0.23:54237 - 52266 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001020809s
	[INFO] 10.244.0.23:46188 - 47837 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755825s
	[INFO] 10.244.0.23:50620 - 40298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145474s
	[INFO] 10.244.0.23:46344 - 40921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123896s
	[INFO] 10.244.0.23:50353 - 65439 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000272665s
	[INFO] 10.244.0.23:50633 - 23346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143762s
	[INFO] 10.244.0.23:52616 - 28857 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002777615s
	[INFO] 10.244.0.23:55533 - 44086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003112269s
	[INFO] 10.244.0.27:55844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000811242s
	[INFO] 10.244.0.27:51921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000498985s
	
	
	==> describe nodes <==
	Name:               addons-535714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-535714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535714
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-535714"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:57:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535714
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:01:25 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:01:25 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:01:25 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:01:25 +0000   Thu, 02 Oct 2025 06:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-535714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 26ed18e3cae343e2ba2a85be4a0a7371
	  System UUID:                26ed18e3-cae3-43e2-ba2a-85be4a0a7371
	  Boot ID:                    73babc46-f812-4e67-b425-db513a204e97
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  default                     cloud-spanner-emulator-85f6b7fc65-hh72s     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  gadget                      gadget-2hn79                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-85f8f8dc54-h7d97                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jcwrw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m19s
	  kube-system                 amd-gpu-device-plugin-f7qcs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 coredns-66bc5c9577-w7hjm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 csi-hostpathplugin-8sjk8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 etcd-addons-535714                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-535714                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-535714       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-z495t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-535714                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 registry-66898fdd98-rc8tq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-proxy-d9npj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-g4hd4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-knwl8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  local-path-storage          local-path-provisioner-648f6765c9-gf62q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hpzfn              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m27s  kube-proxy       
	  Normal  Starting                 4m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m33s  kubelet          Node addons-535714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s  kubelet          Node addons-535714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s  kubelet          Node addons-535714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m32s  kubelet          Node addons-535714 status is now: NodeReady
	  Normal  RegisteredNode           4m30s  node-controller  Node addons-535714 event: Registered Node addons-535714 in Controller
	
	
	==> dmesg <==
	[  +0.137860] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.011507] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 2 06:58] kauditd_printk_skb: 318 callbacks suppressed
	[  +0.288437] kauditd_printk_skb: 263 callbacks suppressed
	[  +1.352194] hrtimer: interrupt took 6208704 ns
	[  +0.000018] kauditd_printk_skb: 341 callbacks suppressed
	[ +14.846599] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.362026] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.319606] kauditd_printk_skb: 17 callbacks suppressed
	[Oct 2 06:59] kauditd_printk_skb: 20 callbacks suppressed
	[ +33.860109] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.779557] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 2 07:00] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.976810] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.109220] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.510995] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.560914] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.223140] kauditd_printk_skb: 56 callbacks suppressed
	[Oct 2 07:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.884695] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.185211] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.060908] kauditd_printk_skb: 58 callbacks suppressed
	[Oct 2 07:02] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.331616] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2] <==
	{"level":"info","ts":"2025-10-02T06:59:53.208488Z","caller":"traceutil/trace.go:172","msg":"trace[1571130924] linearizableReadLoop","detail":"{readStateIndex:1089; appliedIndex:1089; }","duration":"175.96626ms","start":"2025-10-02T06:59:53.032493Z","end":"2025-10-02T06:59:53.208460Z","steps":["trace[1571130924] 'read index received'  (duration: 175.96071ms)","trace[1571130924] 'applied index is now lower than readState.Index'  (duration: 4.861µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T06:59:53.208674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.101833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:53.208717Z","caller":"traceutil/trace.go:172","msg":"trace[479928917] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1046; }","duration":"176.204822ms","start":"2025-10-02T06:59:53.032489Z","end":"2025-10-02T06:59:53.208694Z","steps":["trace[479928917] 'agreement among raft nodes before linearized reading'  (duration: 176.07943ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:59:53.208700Z","caller":"traceutil/trace.go:172","msg":"trace[2123083597] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"180.986558ms","start":"2025-10-02T06:59:53.027698Z","end":"2025-10-02T06:59:53.208685Z","steps":["trace[2123083597] 'process raft request'  (duration: 180.811375ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T06:59:53.208918Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.067675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:53.208937Z","caller":"traceutil/trace.go:172","msg":"trace[243081511] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots; range_end:; response_count:0; response_revision:1047; }","duration":"115.092053ms","start":"2025-10-02T06:59:53.093840Z","end":"2025-10-02T06:59:53.208932Z","steps":["trace[243081511] 'agreement among raft nodes before linearized reading'  (duration: 115.052825ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:59:59.120527Z","caller":"traceutil/trace.go:172","msg":"trace[1214600109] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"213.300454ms","start":"2025-10-02T06:59:58.907210Z","end":"2025-10-02T06:59:59.120511Z","steps":["trace[1214600109] 'process raft request'  (duration: 213.149263ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T06:59:59.120654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.224875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.120682Z","caller":"traceutil/trace.go:172","msg":"trace[750259747] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"205.273849ms","start":"2025-10-02T06:59:58.915403Z","end":"2025-10-02T06:59:59.120677Z","steps":["trace[750259747] 'agreement among raft nodes before linearized reading'  (duration: 205.180538ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:59:59.120512Z","caller":"traceutil/trace.go:172","msg":"trace[1384240821] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"205.082518ms","start":"2025-10-02T06:59:58.915407Z","end":"2025-10-02T06:59:59.120489Z","steps":["trace[1384240821] 'read index received'  (duration: 205.072637ms)","trace[1384240821] 'applied index is now lower than readState.Index'  (duration: 8.699µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T06:59:59.121116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.148075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-10-02T06:59:59.121160Z","caller":"traceutil/trace.go:172","msg":"trace[787006594] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n; range_end:; response_count:1; response_revision:1085; }","duration":"198.245202ms","start":"2025-10-02T06:59:58.922907Z","end":"2025-10-02T06:59:59.121152Z","steps":["trace[787006594] 'agreement among raft nodes before linearized reading'  (duration: 198.083065ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T06:59:59.121300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.835357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.121339Z","caller":"traceutil/trace.go:172","msg":"trace[1316712396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"183.87568ms","start":"2025-10-02T06:59:58.937457Z","end":"2025-10-02T06:59:59.121332Z","steps":["trace[1316712396] 'agreement among raft nodes before linearized reading'  (duration: 183.815946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:32.832647Z","caller":"traceutil/trace.go:172","msg":"trace[1453851995] linearizableReadLoop","detail":"{readStateIndex:1231; appliedIndex:1231; }","duration":"220.066962ms","start":"2025-10-02T07:00:32.612509Z","end":"2025-10-02T07:00:32.832576Z","steps":["trace[1453851995] 'read index received'  (duration: 220.05963ms)","trace[1453851995] 'applied index is now lower than readState.Index'  (duration: 6.189µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T07:00:32.832730Z","caller":"traceutil/trace.go:172","msg":"trace[302351669] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"243.94686ms","start":"2025-10-02T07:00:32.588772Z","end":"2025-10-02T07:00:32.832719Z","steps":["trace[302351669] 'process raft request'  (duration: 243.833114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:32.832967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.479862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T07:00:32.833001Z","caller":"traceutil/trace.go:172","msg":"trace[1089606970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"220.525584ms","start":"2025-10-02T07:00:32.612469Z","end":"2025-10-02T07:00:32.832995Z","steps":["trace[1089606970] 'agreement among raft nodes before linearized reading'  (duration: 220.422716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.990824Z","caller":"traceutil/trace.go:172","msg":"trace[1822440841] linearizableReadLoop","detail":"{readStateIndex:1259; appliedIndex:1259; }","duration":"216.288139ms","start":"2025-10-02T07:00:39.774473Z","end":"2025-10-02T07:00:39.990762Z","steps":["trace[1822440841] 'read index received'  (duration: 216.279919ms)","trace[1822440841] 'applied index is now lower than readState.Index'  (duration: 6.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:00:39.991358Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.077704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:00:39.991456Z","caller":"traceutil/trace.go:172","msg":"trace[1082597067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1206; }","duration":"217.190679ms","start":"2025-10-02T07:00:39.774258Z","end":"2025-10-02T07:00:39.991449Z","steps":["trace[1082597067] 'agreement among raft nodes before linearized reading'  (duration: 216.738402ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.992313Z","caller":"traceutil/trace.go:172","msg":"trace[515400758] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"337.963385ms","start":"2025-10-02T07:00:39.654341Z","end":"2025-10-02T07:00:39.992305Z","steps":["trace[515400758] 'process raft request'  (duration: 337.312964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:39.992477Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.654280Z","time spent":"338.099015ms","remote":"127.0.0.1:56776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1205 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-02T07:00:39.994757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.655974Z","time spent":"338.780211ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-10-02T07:02:18.249354Z","caller":"traceutil/trace.go:172","msg":"trace[1937839981] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"110.209012ms","start":"2025-10-02T07:02:18.139042Z","end":"2025-10-02T07:02:18.249251Z","steps":["trace[1937839981] 'process raft request'  (duration: 107.760601ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:02:24 up 5 min,  0 users,  load average: 1.62, 1.55, 0.77
	Linux addons-535714 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68] <==
	W1002 06:59:04.261853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:59:04.262027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:04.261865       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262054       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 06:59:04.263426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:19.669740       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:19.669928       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 06:59:19.671457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.672416       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.677780       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.698801       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	I1002 06:59:19.813028       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 07:01:02.988144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59036: use of closed network connection
	E1002 07:01:03.204248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59068: use of closed network connection
	I1002 07:01:12.103579       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 07:01:12.401820       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.38.17"}
	I1002 07:01:12.978874       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.6.38"}
	I1002 07:01:20.686056       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20] <==
	I1002 06:57:54.853309       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:57:54.853402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 06:57:54.853436       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 06:57:54.854794       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:57:54.854865       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:57:54.855046       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 06:57:54.858148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 06:57:54.858221       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 06:57:54.858258       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 06:57:54.858263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 06:57:54.858268       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 06:57:54.860904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:57:54.863351       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:57:54.869106       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-535714" podCIDRs=["10.244.0.0/24"]
	E1002 06:58:03.439760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:58:24.819245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:24.819664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:58:24.819801       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:58:24.847762       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:58:24.855798       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:58:24.921306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:58:24.957046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:58:54.928427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:54.966681       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 07:01:16.701698       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	
	
	==> kube-proxy [fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b] <==
	I1002 06:57:56.940558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:57:57.042011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:57:57.042117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1002 06:57:57.042205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:57:57.167383       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 06:57:57.167427       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 06:57:57.167460       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:57:57.190949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:57:57.192886       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:57:57.192902       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:57:57.294325       1 config.go:200] "Starting service config controller"
	I1002 06:57:57.294358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:57:57.294429       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:57:57.294434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:57:57.294455       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:57:57.294459       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:57:57.438397       1 config.go:309] "Starting node config controller"
	I1002 06:57:57.441950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:57:57.479963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:57:57.494463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:57:57.494530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:57:57.494543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca] <==
	E1002 06:57:47.853654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:57:47.853709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:47.853767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:57:47.853824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:57:47.854040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:47.855481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:47.854491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.707149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:57:48.761606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:57:48.783806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:57:48.817274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:57:48.856898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:48.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:57:48.860214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:57:48.880906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.896863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:48.913429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:57:48.964287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:57:48.985241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:57:49.005874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:57:49.118344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:57:49.123456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:49.157781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:57:49.202768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 06:57:51.042340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:01:45 addons-535714 kubelet[1509]: E1002 07:01:45.376281    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-hpzfn" podUID="9071de7c-4e8a-43a9-893f-bdbd130175ef"
	Oct 02 07:01:51 addons-535714 kubelet[1509]: E1002 07:01:51.630400    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388511629508715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:01:51 addons-535714 kubelet[1509]: E1002 07:01:51.630451    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388511629508715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:01:52 addons-535714 kubelet[1509]: I1002 07:01:52.173795    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-rc8tq" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:01:56 addons-535714 kubelet[1509]: I1002 07:01:56.173550    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-hh72s" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:02:00 addons-535714 kubelet[1509]: E1002 07:02:00.176186    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-hpzfn" podUID="9071de7c-4e8a-43a9-893f-bdbd130175ef"
	Oct 02 07:02:01 addons-535714 kubelet[1509]: E1002 07:02:01.633424    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388521633042502  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:02:01 addons-535714 kubelet[1509]: E1002 07:02:01.633468    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388521633042502  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:02:09 addons-535714 kubelet[1509]: I1002 07:02:09.173270    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d9npj" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:02:11 addons-535714 kubelet[1509]: E1002 07:02:11.636749    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388531636177547  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:02:11 addons-535714 kubelet[1509]: E1002 07:02:11.636777    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388531636177547  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:471727}  inodes_used:{value:166}}"
	Oct 02 07:02:15 addons-535714 kubelet[1509]: E1002 07:02:15.178849    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-hpzfn" podUID="9071de7c-4e8a-43a9-893f-bdbd130175ef"
	Oct 02 07:02:15 addons-535714 kubelet[1509]: E1002 07:02:15.475141    1509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 07:02:15 addons-535714 kubelet[1509]: E1002 07:02:15.475259    1509 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 07:02:15 addons-535714 kubelet[1509]: E1002 07:02:15.476594    1509 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(c134160b-cfc5-4bda-9771-650c3dc1da25): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:02:15 addons-535714 kubelet[1509]: E1002 07:02:15.476662    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:02:16 addons-535714 kubelet[1509]: I1002 07:02:16.175577    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:02:16 addons-535714 kubelet[1509]: E1002 07:02:16.239438    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:02:20 addons-535714 kubelet[1509]: I1002 07:02:20.313893    1509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-85f8f8dc54-h7d97" podStartSLOduration=0.998843831 podStartE2EDuration="1m7.313865444s" podCreationTimestamp="2025-10-02 07:01:13 +0000 UTC" firstStartedPulling="2025-10-02 07:01:13.647713199 +0000 UTC m=+202.652885098" lastFinishedPulling="2025-10-02 07:02:19.962734824 +0000 UTC m=+268.967906711" observedRunningTime="2025-10-02 07:02:20.310798349 +0000 UTC m=+269.315970257" watchObservedRunningTime="2025-10-02 07:02:20.313865444 +0000 UTC m=+269.319037349"
	Oct 02 07:02:21 addons-535714 kubelet[1509]: E1002 07:02:21.642535    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388541641938014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:485553}  inodes_used:{value:171}}"
	Oct 02 07:02:21 addons-535714 kubelet[1509]: E1002 07:02:21.642588    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388541641938014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:485553}  inodes_used:{value:171}}"
	Oct 02 07:02:24 addons-535714 kubelet[1509]: I1002 07:02:24.174183    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f7qcs" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:02:24 addons-535714 kubelet[1509]: I1002 07:02:24.787353    1509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rvw\" (UniqueName: \"kubernetes.io/projected/d2d9e839-b6d4-4982-9fb1-a58db70a15c8-kube-api-access-s4rvw\") pod \"d2d9e839-b6d4-4982-9fb1-a58db70a15c8\" (UID: \"d2d9e839-b6d4-4982-9fb1-a58db70a15c8\") "
	Oct 02 07:02:24 addons-535714 kubelet[1509]: I1002 07:02:24.790560    1509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d9e839-b6d4-4982-9fb1-a58db70a15c8-kube-api-access-s4rvw" (OuterVolumeSpecName: "kube-api-access-s4rvw") pod "d2d9e839-b6d4-4982-9fb1-a58db70a15c8" (UID: "d2d9e839-b6d4-4982-9fb1-a58db70a15c8"). InnerVolumeSpecName "kube-api-access-s4rvw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 07:02:24 addons-535714 kubelet[1509]: I1002 07:02:24.887830    1509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s4rvw\" (UniqueName: \"kubernetes.io/projected/d2d9e839-b6d4-4982-9fb1-a58db70a15c8-kube-api-access-s4rvw\") on node \"addons-535714\" DevicePath \"\""
	
	
	==> storage-provisioner [0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0] <==
	W1002 07:02:00.648976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:02.654919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:02.663800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:04.667428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:04.672845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:06.677656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:06.683900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:08.688340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:08.698661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:10.703394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:10.711031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:12.714620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:12.719859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:14.725483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:14.732519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:16.737047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:16.750149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:18.757409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:18.766861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:20.772246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:20.780430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:22.785942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:22.791758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:24.800498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:24.808492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-535714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n yakd-dashboard-5ff678cb9-hpzfn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-535714 describe pod nginx ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n yakd-dashboard-5ff678cb9-hpzfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-535714 describe pod nginx ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n yakd-dashboard-5ff678cb9-hpzfn: exit status 1 (71.531022ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jxhkh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  74s   default-scheduler  Successfully assigned default/nginx to addons-535714
	  Normal   Pulling    73s   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     11s   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     11s   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     10s   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jsw7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46z2n" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-hpzfn" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-535714 describe pod nginx ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n yakd-dashboard-5ff678cb9-hpzfn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (75.02s)

TestAddons/parallel/Ingress (492.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-535714 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-535714 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-535714 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c134160b-cfc5-4bda-9771-650c3dc1da25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-02 07:09:12.67844922 +0000 UTC m=+730.846428007
addons_test.go:252: (dbg) Run:  kubectl --context addons-535714 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-535714 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-535714/192.168.39.164
Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
IP:  10.244.0.25
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jxhkh:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-535714
Warning  Failed     6m15s (x2 over 6m57s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     90s (x4 over 6m57s)    kubelet            Error: ErrImagePull
Warning  Failed     90s (x2 over 3m31s)    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    22s (x9 over 6m56s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     22s (x9 over 6m56s)    kubelet            Error: ImagePullBackOff
Normal   Pulling    7s (x5 over 7m59s)     kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-535714 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-535714 logs nginx -n default: exit status 1 (65.384467ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-535714 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535714 -n addons-535714
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 logs -n 25: (1.382379635s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ -p binary-mirror-257523                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ addons  │ enable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-535714 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ip      │ addons-535714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │ 02 Oct 25 07:03 UTC │
	│ addons  │ addons-535714 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │ 02 Oct 25 07:03 UTC │
	│ addons  │ addons-535714 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ addons  │ addons-535714 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ addons  │ addons-535714 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:12.613104  566681 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:12.613401  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613412  566681 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:12.613416  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613691  566681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:12.614327  566681 out.go:368] Setting JSON to false
	I1002 06:57:12.615226  566681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49183,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:12.615318  566681 start.go:140] virtualization: kvm guest
	I1002 06:57:12.616912  566681 out.go:179] * [addons-535714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:12.618030  566681 notify.go:220] Checking for updates...
	I1002 06:57:12.618070  566681 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:57:12.619267  566681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:12.620404  566681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:12.621815  566681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:12.622922  566681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:57:12.623998  566681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:57:12.625286  566681 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:12.655279  566681 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 06:57:12.656497  566681 start.go:304] selected driver: kvm2
	I1002 06:57:12.656511  566681 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:12.656523  566681 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:57:12.657469  566681 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.657563  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.671466  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.671499  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.684735  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.684785  566681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:12.685037  566681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:57:12.685069  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:12.685110  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:12.685121  566681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:12.685226  566681 start.go:348] cluster config:
	{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:12.685336  566681 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.687549  566681 out.go:179] * Starting "addons-535714" primary control-plane node in "addons-535714" cluster
	I1002 06:57:12.688758  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:12.688809  566681 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:12.688824  566681 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:12.688927  566681 preload.go:233] Found /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:57:12.688941  566681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:57:12.689355  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:12.689385  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json: {Name:mkd226c1b0f282f7928061e8123511cda66ecb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:12.689560  566681 start.go:360] acquireMachinesLock for addons-535714: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 06:57:12.689631  566681 start.go:364] duration metric: took 53.377µs to acquireMachinesLock for "addons-535714"
	I1002 06:57:12.689654  566681 start.go:93] Provisioning new machine with config: &{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:12.689738  566681 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 06:57:12.691999  566681 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 06:57:12.692183  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:12.692244  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:12.705101  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I1002 06:57:12.705724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:12.706300  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:12.706320  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:12.706770  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:12.707010  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:12.707209  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:12.707401  566681 start.go:159] libmachine.API.Create for "addons-535714" (driver="kvm2")
	I1002 06:57:12.707450  566681 client.go:168] LocalClient.Create starting
	I1002 06:57:12.707494  566681 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem
	I1002 06:57:12.888250  566681 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem
	I1002 06:57:13.081005  566681 main.go:141] libmachine: Running pre-create checks...
	I1002 06:57:13.081030  566681 main.go:141] libmachine: (addons-535714) Calling .PreCreateCheck
	I1002 06:57:13.081598  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:13.082053  566681 main.go:141] libmachine: Creating machine...
	I1002 06:57:13.082069  566681 main.go:141] libmachine: (addons-535714) Calling .Create
	I1002 06:57:13.082276  566681 main.go:141] libmachine: (addons-535714) creating domain...
	I1002 06:57:13.082300  566681 main.go:141] libmachine: (addons-535714) creating network...
	I1002 06:57:13.083762  566681 main.go:141] libmachine: (addons-535714) DBG | found existing default network
	I1002 06:57:13.084004  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084021  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>default</name>
	I1002 06:57:13.084029  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 06:57:13.084036  566681 main.go:141] libmachine: (addons-535714) DBG |   <forward mode='nat'>
	I1002 06:57:13.084041  566681 main.go:141] libmachine: (addons-535714) DBG |     <nat>
	I1002 06:57:13.084047  566681 main.go:141] libmachine: (addons-535714) DBG |       <port start='1024' end='65535'/>
	I1002 06:57:13.084051  566681 main.go:141] libmachine: (addons-535714) DBG |     </nat>
	I1002 06:57:13.084055  566681 main.go:141] libmachine: (addons-535714) DBG |   </forward>
	I1002 06:57:13.084061  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 06:57:13.084068  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 06:57:13.084084  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 06:57:13.084098  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084111  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 06:57:13.084123  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084131  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084152  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084191  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084749  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.084601  566709 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1002 06:57:13.084771  566681 main.go:141] libmachine: (addons-535714) DBG | defining private network:
	I1002 06:57:13.084780  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084785  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084801  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.084820  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.084831  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.084840  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084851  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.084861  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084868  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084878  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084888  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.090767  566681 main.go:141] libmachine: (addons-535714) DBG | creating private network mk-addons-535714 192.168.39.0/24...
	I1002 06:57:13.158975  566681 main.go:141] libmachine: (addons-535714) DBG | private network mk-addons-535714 192.168.39.0/24 created
	I1002 06:57:13.159275  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.159307  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.159316  566681 main.go:141] libmachine: (addons-535714) setting up store path in /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.159335  566681 main.go:141] libmachine: (addons-535714) building disk image from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:13.159343  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>30f68bcb-0ec3-45ac-9012-251c5feb215b</uuid>
	I1002 06:57:13.159350  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 06:57:13.159356  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:03:a3:ce'/>
	I1002 06:57:13.159360  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.159383  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.159402  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.159413  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.159428  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.159461  566681 main.go:141] libmachine: (addons-535714) Downloading /home/jenkins/minikube-integration/21643-562157/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 06:57:13.159477  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.159489  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.159500  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.159522  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.159293  566709 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.427161  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.426986  566709 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa...
	I1002 06:57:13.691596  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691434  566709 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk...
	I1002 06:57:13.691620  566681 main.go:141] libmachine: (addons-535714) DBG | Writing magic tar header
	I1002 06:57:13.691651  566681 main.go:141] libmachine: (addons-535714) DBG | Writing SSH key tar header
	I1002 06:57:13.691660  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691559  566709 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.691671  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714
	I1002 06:57:13.691678  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines
	I1002 06:57:13.691687  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.691694  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157
	I1002 06:57:13.691702  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 06:57:13.691710  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins
	I1002 06:57:13.691724  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 (perms=drwx------)
	I1002 06:57:13.691738  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines (perms=drwxr-xr-x)
	I1002 06:57:13.691747  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home
	I1002 06:57:13.691758  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube (perms=drwxr-xr-x)
	I1002 06:57:13.691769  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157 (perms=drwxrwxr-x)
	I1002 06:57:13.691781  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 06:57:13.691789  566681 main.go:141] libmachine: (addons-535714) DBG | skipping /home - not owner
	I1002 06:57:13.691803  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 06:57:13.691811  566681 main.go:141] libmachine: (addons-535714) defining domain...
	I1002 06:57:13.693046  566681 main.go:141] libmachine: (addons-535714) defining domain using XML: 
	I1002 06:57:13.693074  566681 main.go:141] libmachine: (addons-535714) <domain type='kvm'>
	I1002 06:57:13.693080  566681 main.go:141] libmachine: (addons-535714)   <name>addons-535714</name>
	I1002 06:57:13.693085  566681 main.go:141] libmachine: (addons-535714)   <memory unit='MiB'>4096</memory>
	I1002 06:57:13.693090  566681 main.go:141] libmachine: (addons-535714)   <vcpu>2</vcpu>
	I1002 06:57:13.693093  566681 main.go:141] libmachine: (addons-535714)   <features>
	I1002 06:57:13.693098  566681 main.go:141] libmachine: (addons-535714)     <acpi/>
	I1002 06:57:13.693102  566681 main.go:141] libmachine: (addons-535714)     <apic/>
	I1002 06:57:13.693109  566681 main.go:141] libmachine: (addons-535714)     <pae/>
	I1002 06:57:13.693115  566681 main.go:141] libmachine: (addons-535714)   </features>
	I1002 06:57:13.693124  566681 main.go:141] libmachine: (addons-535714)   <cpu mode='host-passthrough'>
	I1002 06:57:13.693132  566681 main.go:141] libmachine: (addons-535714)   </cpu>
	I1002 06:57:13.693155  566681 main.go:141] libmachine: (addons-535714)   <os>
	I1002 06:57:13.693163  566681 main.go:141] libmachine: (addons-535714)     <type>hvm</type>
	I1002 06:57:13.693172  566681 main.go:141] libmachine: (addons-535714)     <boot dev='cdrom'/>
	I1002 06:57:13.693186  566681 main.go:141] libmachine: (addons-535714)     <boot dev='hd'/>
	I1002 06:57:13.693192  566681 main.go:141] libmachine: (addons-535714)     <bootmenu enable='no'/>
	I1002 06:57:13.693197  566681 main.go:141] libmachine: (addons-535714)   </os>
	I1002 06:57:13.693202  566681 main.go:141] libmachine: (addons-535714)   <devices>
	I1002 06:57:13.693207  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='cdrom'>
	I1002 06:57:13.693215  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.693220  566681 main.go:141] libmachine: (addons-535714)       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.693225  566681 main.go:141] libmachine: (addons-535714)       <readonly/>
	I1002 06:57:13.693231  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693240  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='disk'>
	I1002 06:57:13.693255  566681 main.go:141] libmachine: (addons-535714)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 06:57:13.693309  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.693334  566681 main.go:141] libmachine: (addons-535714)       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.693341  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693357  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693371  566681 main.go:141] libmachine: (addons-535714)       <source network='mk-addons-535714'/>
	I1002 06:57:13.693378  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693391  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693399  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693411  566681 main.go:141] libmachine: (addons-535714)       <source network='default'/>
	I1002 06:57:13.693416  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693435  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693445  566681 main.go:141] libmachine: (addons-535714)     <serial type='pty'>
	I1002 06:57:13.693480  566681 main.go:141] libmachine: (addons-535714)       <target port='0'/>
	I1002 06:57:13.693520  566681 main.go:141] libmachine: (addons-535714)     </serial>
	I1002 06:57:13.693540  566681 main.go:141] libmachine: (addons-535714)     <console type='pty'>
	I1002 06:57:13.693552  566681 main.go:141] libmachine: (addons-535714)       <target type='serial' port='0'/>
	I1002 06:57:13.693564  566681 main.go:141] libmachine: (addons-535714)     </console>
	I1002 06:57:13.693575  566681 main.go:141] libmachine: (addons-535714)     <rng model='virtio'>
	I1002 06:57:13.693588  566681 main.go:141] libmachine: (addons-535714)       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.693598  566681 main.go:141] libmachine: (addons-535714)     </rng>
	I1002 06:57:13.693609  566681 main.go:141] libmachine: (addons-535714)   </devices>
	I1002 06:57:13.693618  566681 main.go:141] libmachine: (addons-535714) </domain>
	I1002 06:57:13.693631  566681 main.go:141] libmachine: (addons-535714) 
	I1002 06:57:13.698471  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:ff:9b:2c in network default
	I1002 06:57:13.699181  566681 main.go:141] libmachine: (addons-535714) starting domain...
	I1002 06:57:13.699210  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:13.699219  566681 main.go:141] libmachine: (addons-535714) ensuring networks are active...
	I1002 06:57:13.699886  566681 main.go:141] libmachine: (addons-535714) Ensuring network default is active
	I1002 06:57:13.700240  566681 main.go:141] libmachine: (addons-535714) Ensuring network mk-addons-535714 is active
	I1002 06:57:13.700911  566681 main.go:141] libmachine: (addons-535714) getting domain XML...
	I1002 06:57:13.701998  566681 main.go:141] libmachine: (addons-535714) DBG | starting domain XML:
	I1002 06:57:13.702019  566681 main.go:141] libmachine: (addons-535714) DBG | <domain type='kvm'>
	I1002 06:57:13.702029  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>addons-535714</name>
	I1002 06:57:13.702036  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>26ed18e3-cae3-43e2-ba2a-85be4a0a7371</uuid>
	I1002 06:57:13.702049  566681 main.go:141] libmachine: (addons-535714) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 06:57:13.702060  566681 main.go:141] libmachine: (addons-535714) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 06:57:13.702069  566681 main.go:141] libmachine: (addons-535714) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 06:57:13.702075  566681 main.go:141] libmachine: (addons-535714) DBG |   <os>
	I1002 06:57:13.702085  566681 main.go:141] libmachine: (addons-535714) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 06:57:13.702093  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='cdrom'/>
	I1002 06:57:13.702101  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='hd'/>
	I1002 06:57:13.702116  566681 main.go:141] libmachine: (addons-535714) DBG |     <bootmenu enable='no'/>
	I1002 06:57:13.702127  566681 main.go:141] libmachine: (addons-535714) DBG |   </os>
	I1002 06:57:13.702134  566681 main.go:141] libmachine: (addons-535714) DBG |   <features>
	I1002 06:57:13.702180  566681 main.go:141] libmachine: (addons-535714) DBG |     <acpi/>
	I1002 06:57:13.702204  566681 main.go:141] libmachine: (addons-535714) DBG |     <apic/>
	I1002 06:57:13.702215  566681 main.go:141] libmachine: (addons-535714) DBG |     <pae/>
	I1002 06:57:13.702220  566681 main.go:141] libmachine: (addons-535714) DBG |   </features>
	I1002 06:57:13.702241  566681 main.go:141] libmachine: (addons-535714) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 06:57:13.702256  566681 main.go:141] libmachine: (addons-535714) DBG |   <clock offset='utc'/>
	I1002 06:57:13.702265  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 06:57:13.702283  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_reboot>restart</on_reboot>
	I1002 06:57:13.702295  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_crash>destroy</on_crash>
	I1002 06:57:13.702305  566681 main.go:141] libmachine: (addons-535714) DBG |   <devices>
	I1002 06:57:13.702317  566681 main.go:141] libmachine: (addons-535714) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 06:57:13.702328  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='cdrom'>
	I1002 06:57:13.702340  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw'/>
	I1002 06:57:13.702352  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.702364  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.702375  566681 main.go:141] libmachine: (addons-535714) DBG |       <readonly/>
	I1002 06:57:13.702387  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 06:57:13.702398  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702419  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='disk'>
	I1002 06:57:13.702432  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 06:57:13.702451  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.702462  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.702472  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 06:57:13.702482  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702490  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 06:57:13.702503  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 06:57:13.702512  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702521  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 06:57:13.702535  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 06:57:13.702589  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 06:57:13.702612  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702624  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702630  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:00:74:bc'/>
	I1002 06:57:13.702639  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='mk-addons-535714'/>
	I1002 06:57:13.702646  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702658  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 06:57:13.702665  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702675  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702687  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:ff:9b:2c'/>
	I1002 06:57:13.702697  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='default'/>
	I1002 06:57:13.702707  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702719  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 06:57:13.702730  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702740  566681 main.go:141] libmachine: (addons-535714) DBG |     <serial type='pty'>
	I1002 06:57:13.702751  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='isa-serial' port='0'>
	I1002 06:57:13.702765  566681 main.go:141] libmachine: (addons-535714) DBG |         <model name='isa-serial'/>
	I1002 06:57:13.702775  566681 main.go:141] libmachine: (addons-535714) DBG |       </target>
	I1002 06:57:13.702784  566681 main.go:141] libmachine: (addons-535714) DBG |     </serial>
	I1002 06:57:13.702806  566681 main.go:141] libmachine: (addons-535714) DBG |     <console type='pty'>
	I1002 06:57:13.702820  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='serial' port='0'/>
	I1002 06:57:13.702827  566681 main.go:141] libmachine: (addons-535714) DBG |     </console>
	I1002 06:57:13.702839  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='mouse' bus='ps2'/>
	I1002 06:57:13.702850  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 06:57:13.702861  566681 main.go:141] libmachine: (addons-535714) DBG |     <audio id='1' type='none'/>
	I1002 06:57:13.702881  566681 main.go:141] libmachine: (addons-535714) DBG |     <memballoon model='virtio'>
	I1002 06:57:13.702895  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 06:57:13.702901  566681 main.go:141] libmachine: (addons-535714) DBG |     </memballoon>
	I1002 06:57:13.702910  566681 main.go:141] libmachine: (addons-535714) DBG |     <rng model='virtio'>
	I1002 06:57:13.702918  566681 main.go:141] libmachine: (addons-535714) DBG |       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.702929  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 06:57:13.702944  566681 main.go:141] libmachine: (addons-535714) DBG |     </rng>
	I1002 06:57:13.702957  566681 main.go:141] libmachine: (addons-535714) DBG |   </devices>
	I1002 06:57:13.702972  566681 main.go:141] libmachine: (addons-535714) DBG | </domain>
	I1002 06:57:13.702987  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:14.963247  566681 main.go:141] libmachine: (addons-535714) waiting for domain to start...
	I1002 06:57:14.964664  566681 main.go:141] libmachine: (addons-535714) domain is now running
	I1002 06:57:14.964695  566681 main.go:141] libmachine: (addons-535714) waiting for IP...
	I1002 06:57:14.965420  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:14.966032  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:14.966060  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:14.966362  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:14.966431  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:14.966367  566709 retry.go:31] will retry after 210.201926ms: waiting for domain to come up
	I1002 06:57:15.178058  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.178797  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.178832  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.179051  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.179089  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.179030  566709 retry.go:31] will retry after 312.318729ms: waiting for domain to come up
	I1002 06:57:15.493036  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.493844  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.493865  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.494158  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.494260  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.494172  566709 retry.go:31] will retry after 379.144998ms: waiting for domain to come up
	I1002 06:57:15.874866  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.875597  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.875618  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.875940  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.875972  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.875891  566709 retry.go:31] will retry after 392.719807ms: waiting for domain to come up
	I1002 06:57:16.270678  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.271369  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.271417  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.271795  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.271822  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.271752  566709 retry.go:31] will retry after 502.852746ms: waiting for domain to come up
	I1002 06:57:16.776382  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.777033  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.777083  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.777418  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.777452  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.777390  566709 retry.go:31] will retry after 817.041708ms: waiting for domain to come up
	I1002 06:57:17.596403  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:17.597002  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:17.597037  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:17.597304  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:17.597337  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:17.597286  566709 retry.go:31] will retry after 1.129250566s: waiting for domain to come up
	I1002 06:57:18.728727  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:18.729410  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:18.729438  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:18.729739  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:18.729770  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:18.729716  566709 retry.go:31] will retry after 1.486801145s: waiting for domain to come up
	I1002 06:57:20.218801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:20.219514  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:20.219546  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:20.219811  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:20.219864  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:20.219802  566709 retry.go:31] will retry after 1.676409542s: waiting for domain to come up
	I1002 06:57:21.898812  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:21.899513  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:21.899536  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:21.899819  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:21.899877  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:21.899808  566709 retry.go:31] will retry after 1.43578276s: waiting for domain to come up
	I1002 06:57:23.337598  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:23.338214  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:23.338235  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:23.338569  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:23.338642  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:23.338553  566709 retry.go:31] will retry after 2.182622976s: waiting for domain to come up
	I1002 06:57:25.524305  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:25.524996  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:25.525030  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:25.525352  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:25.525383  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:25.525329  566709 retry.go:31] will retry after 2.567637867s: waiting for domain to come up
	I1002 06:57:28.094839  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:28.095351  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:28.095371  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:28.095666  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:28.095696  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:28.095635  566709 retry.go:31] will retry after 3.838879921s: waiting for domain to come up
	I1002 06:57:31.938799  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939560  566681 main.go:141] libmachine: (addons-535714) found domain IP: 192.168.39.164
	I1002 06:57:31.939593  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has current primary IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939601  566681 main.go:141] libmachine: (addons-535714) reserving static IP address...
	I1002 06:57:31.940101  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find host DHCP lease matching {name: "addons-535714", mac: "52:54:00:00:74:bc", ip: "192.168.39.164"} in network mk-addons-535714
	I1002 06:57:32.153010  566681 main.go:141] libmachine: (addons-535714) DBG | Getting to WaitForSSH function...
	I1002 06:57:32.153043  566681 main.go:141] libmachine: (addons-535714) reserved static IP address 192.168.39.164 for domain addons-535714
	I1002 06:57:32.153056  566681 main.go:141] libmachine: (addons-535714) waiting for SSH...
	I1002 06:57:32.156675  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157263  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.157288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157522  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH client type: external
	I1002 06:57:32.157548  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa (-rw-------)
	I1002 06:57:32.157582  566681 main.go:141] libmachine: (addons-535714) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 06:57:32.157609  566681 main.go:141] libmachine: (addons-535714) DBG | About to run SSH command:
	I1002 06:57:32.157620  566681 main.go:141] libmachine: (addons-535714) DBG | exit 0
	I1002 06:57:32.286418  566681 main.go:141] libmachine: (addons-535714) DBG | SSH cmd err, output: <nil>: 
	I1002 06:57:32.286733  566681 main.go:141] libmachine: (addons-535714) domain creation complete
	I1002 06:57:32.287044  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:32.287640  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288020  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288207  566681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 06:57:32.288223  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:32.289782  566681 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 06:57:32.289795  566681 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 06:57:32.289800  566681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 06:57:32.289805  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.292433  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.292851  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.292897  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.293050  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.293317  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293481  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293658  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.293813  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.294063  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.294076  566681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 06:57:32.392654  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.392681  566681 main.go:141] libmachine: Detecting the provisioner...
	I1002 06:57:32.392690  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.396029  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396454  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.396486  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396681  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.396903  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397079  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397260  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.397412  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.397680  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.397696  566681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 06:57:32.501992  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 06:57:32.502093  566681 main.go:141] libmachine: found compatible host: buildroot
	I1002 06:57:32.502117  566681 main.go:141] libmachine: Provisioning with buildroot...
	I1002 06:57:32.502131  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502439  566681 buildroot.go:166] provisioning hostname "addons-535714"
	I1002 06:57:32.502476  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502701  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.506170  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506653  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.506716  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506786  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.507040  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507252  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507426  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.507729  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.507997  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.508013  566681 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535714 && echo "addons-535714" | sudo tee /etc/hostname
	I1002 06:57:32.632360  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535714
	
	I1002 06:57:32.632404  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.635804  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636293  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.636319  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636574  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.636804  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.636969  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.637110  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.637297  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.637584  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.637613  566681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:57:32.752063  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.752119  566681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 06:57:32.752193  566681 buildroot.go:174] setting up certificates
	I1002 06:57:32.752210  566681 provision.go:84] configureAuth start
	I1002 06:57:32.752256  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.752721  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:32.756026  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756514  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.756545  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756704  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.759506  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.759945  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.759972  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.760113  566681 provision.go:143] copyHostCerts
	I1002 06:57:32.760210  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 06:57:32.760331  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 06:57:32.760392  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 06:57:32.760440  566681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.addons-535714 san=[127.0.0.1 192.168.39.164 addons-535714 localhost minikube]
	I1002 06:57:32.997259  566681 provision.go:177] copyRemoteCerts
	I1002 06:57:32.997339  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:57:32.997365  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.001746  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002246  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.002275  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002606  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.002841  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.003067  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.003261  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.087811  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:57:33.120074  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:57:33.152344  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:57:33.183560  566681 provision.go:87] duration metric: took 431.305231ms to configureAuth
	I1002 06:57:33.183592  566681 buildroot.go:189] setting minikube options for container-runtime
	I1002 06:57:33.183785  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:33.183901  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.187438  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.187801  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.187825  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.188034  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.188285  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188508  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188682  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.188927  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.189221  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.189246  566681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:57:33.455871  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:57:33.455896  566681 main.go:141] libmachine: Checking connection to Docker...
	I1002 06:57:33.455904  566681 main.go:141] libmachine: (addons-535714) Calling .GetURL
	I1002 06:57:33.457296  566681 main.go:141] libmachine: (addons-535714) DBG | using libvirt version 8000000
	I1002 06:57:33.460125  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460550  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.460582  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460738  566681 main.go:141] libmachine: Docker is up and running!
	I1002 06:57:33.460770  566681 main.go:141] libmachine: Reticulating splines...
	I1002 06:57:33.460780  566681 client.go:171] duration metric: took 20.753318284s to LocalClient.Create
	I1002 06:57:33.460805  566681 start.go:167] duration metric: took 20.753406484s to libmachine.API.Create "addons-535714"
	I1002 06:57:33.460815  566681 start.go:293] postStartSetup for "addons-535714" (driver="kvm2")
	I1002 06:57:33.460824  566681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:57:33.460841  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.461104  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:57:33.461149  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.463666  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464001  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.464024  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.464486  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.464662  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.464805  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.547032  566681 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:57:33.552379  566681 info.go:137] Remote host: Buildroot 2025.02
	I1002 06:57:33.552408  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 06:57:33.552489  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 06:57:33.552524  566681 start.go:296] duration metric: took 91.702797ms for postStartSetup
	I1002 06:57:33.552573  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:33.553229  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.556294  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556659  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.556691  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556979  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:33.557200  566681 start.go:128] duration metric: took 20.867433906s to createHost
	I1002 06:57:33.557235  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.559569  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.559976  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.560033  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.560209  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.560387  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560524  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560647  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.560782  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.561006  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.561024  566681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 06:57:33.663941  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759388253.625480282
	
	I1002 06:57:33.663966  566681 fix.go:216] guest clock: 1759388253.625480282
	I1002 06:57:33.663974  566681 fix.go:229] Guest: 2025-10-02 06:57:33.625480282 +0000 UTC Remote: 2025-10-02 06:57:33.557215192 +0000 UTC m=+20.980868887 (delta=68.26509ms)
	I1002 06:57:33.664010  566681 fix.go:200] guest clock delta is within tolerance: 68.26509ms
	I1002 06:57:33.664022  566681 start.go:83] releasing machines lock for "addons-535714", held for 20.974372731s
	I1002 06:57:33.664050  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.664374  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.667827  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668310  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.668344  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668518  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669079  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669275  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669418  566681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:57:33.669466  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.669473  566681 ssh_runner.go:195] Run: cat /version.json
	I1002 06:57:33.669492  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.672964  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673168  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673457  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673495  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673642  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673878  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674001  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.674093  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674177  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674268  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.674352  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674502  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.752747  566681 ssh_runner.go:195] Run: systemctl --version
	I1002 06:57:33.777712  566681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:57:33.941402  566681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:57:33.949414  566681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:57:33.949490  566681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:57:33.971089  566681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:57:33.971121  566681 start.go:495] detecting cgroup driver to use...
	I1002 06:57:33.971215  566681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:57:33.990997  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:57:34.009642  566681 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:57:34.009719  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:57:34.028675  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:57:34.045011  566681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:57:34.191090  566681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:57:34.404836  566681 docker.go:234] disabling docker service ...
	I1002 06:57:34.404915  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:57:34.421846  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:57:34.437815  566681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:57:34.593256  566681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:57:34.739807  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:57:34.755656  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:57:34.780318  566681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:57:34.780381  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.794344  566681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:57:34.794437  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.807921  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.821174  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.834265  566681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:57:34.848039  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.861013  566681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.882928  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.895874  566681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:57:34.906834  566681 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:57:34.906902  566681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:57:34.930283  566681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:57:34.944196  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:35.086744  566681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:57:35.203118  566681 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:57:35.203247  566681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:57:35.208872  566681 start.go:563] Will wait 60s for crictl version
	I1002 06:57:35.208951  566681 ssh_runner.go:195] Run: which crictl
	I1002 06:57:35.213165  566681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 06:57:35.254690  566681 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 06:57:35.254809  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.285339  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.318360  566681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 06:57:35.319680  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:35.322840  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323187  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:35.323215  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323541  566681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 06:57:35.328294  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:35.344278  566681 kubeadm.go:883] updating cluster {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:57:35.344381  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:35.344426  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:35.382419  566681 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 06:57:35.382487  566681 ssh_runner.go:195] Run: which lz4
	I1002 06:57:35.386980  566681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 06:57:35.392427  566681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 06:57:35.392457  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 06:57:36.901929  566681 crio.go:462] duration metric: took 1.514994717s to copy over tarball
	I1002 06:57:36.902020  566681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 06:57:38.487982  566681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.585912508s)
	I1002 06:57:38.488018  566681 crio.go:469] duration metric: took 1.586055344s to extract the tarball
	I1002 06:57:38.488028  566681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 06:57:38.530041  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:38.574743  566681 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:57:38.574771  566681 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:57:38.574780  566681 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.34.1 crio true true} ...
	I1002 06:57:38.574907  566681 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:57:38.574982  566681 ssh_runner.go:195] Run: crio config
	I1002 06:57:38.626077  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:38.626100  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:38.626114  566681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:57:38.626157  566681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535714 NodeName:addons-535714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:57:38.626290  566681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-535714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:57:38.626379  566681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:57:38.638875  566681 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:57:38.638942  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:57:38.650923  566681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 06:57:38.672765  566681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:57:38.695198  566681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 06:57:38.716738  566681 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1002 06:57:38.721153  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:38.736469  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:38.882003  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:38.903662  566681 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714 for IP: 192.168.39.164
	I1002 06:57:38.903695  566681 certs.go:195] generating shared ca certs ...
	I1002 06:57:38.903722  566681 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.903919  566681 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 06:57:38.961629  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt ...
	I1002 06:57:38.961659  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt: {Name:mkce3dd067e2e7843e2a288d28dbaf57f057aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961829  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key ...
	I1002 06:57:38.961841  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key: {Name:mka327360c05168b3164194068242bb15d511ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961939  566681 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 06:57:39.050167  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt ...
	I1002 06:57:39.050199  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt: {Name:mkf18fa19ddf5ebcd4669a9a2e369e414c03725b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050375  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key ...
	I1002 06:57:39.050388  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key: {Name:mk774f61354e64c5344d2d0d059164fff9076c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050460  566681 certs.go:257] generating profile certs ...
	I1002 06:57:39.050516  566681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key
	I1002 06:57:39.050537  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt with IP's: []
	I1002 06:57:39.147298  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt ...
	I1002 06:57:39.147330  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: {Name:mk17b498d515b2f43666faa03b17d7223c9a8157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147495  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key ...
	I1002 06:57:39.147505  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key: {Name:mke1e8140b8916f87dd85d98abe8a51503f6e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147578  566681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed
	I1002 06:57:39.147597  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1002 06:57:39.310236  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed ...
	I1002 06:57:39.310266  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed: {Name:mk247c08955d8ed7427926c7244db21ffe837768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310428  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed ...
	I1002 06:57:39.310441  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed: {Name:mkc3fa16c2fd82a07eac700fa655e28a42c60f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310525  566681 certs.go:382] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt
	I1002 06:57:39.310624  566681 certs.go:386] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key
	I1002 06:57:39.310682  566681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key
	I1002 06:57:39.310701  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt with IP's: []
	I1002 06:57:39.497350  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt ...
	I1002 06:57:39.497386  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt: {Name:mk4f28529f4cee1ff8311028b7bb7fc35a77bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497555  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key ...
	I1002 06:57:39.497569  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key: {Name:mkfac0b0a329edb8634114371202cb4ba011c129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497750  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:57:39.497784  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:57:39.497808  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:57:39.497835  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 06:57:39.498475  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:57:39.530649  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:57:39.561340  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:57:39.593844  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:57:39.629628  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:57:39.668367  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:57:39.699924  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:57:39.730177  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:57:39.761107  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:57:39.791592  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:57:39.813294  566681 ssh_runner.go:195] Run: openssl version
	I1002 06:57:39.820587  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:57:39.834664  566681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840283  566681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840348  566681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.848412  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:57:39.863027  566681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:57:39.868269  566681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:57:39.868325  566681 kubeadm.go:400] StartCluster: {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:39.868408  566681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:57:39.868500  566681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:57:39.910571  566681 cri.go:89] found id: ""
	I1002 06:57:39.910645  566681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:57:39.923825  566681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:57:39.936522  566681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:57:39.949191  566681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:57:39.949214  566681 kubeadm.go:157] found existing configuration files:
	
	I1002 06:57:39.949292  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:57:39.961561  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:57:39.961637  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:57:39.974337  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:57:39.986029  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:57:39.986104  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:57:39.997992  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.008894  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:57:40.008966  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.021235  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:57:40.032694  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:57:40.032754  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:57:40.045554  566681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 06:57:40.211362  566681 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:57:51.799597  566681 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:57:51.799689  566681 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:57:51.799798  566681 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:57:51.799950  566681 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:57:51.800082  566681 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:57:51.800206  566681 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:57:51.802349  566681 out.go:252]   - Generating certificates and keys ...
	I1002 06:57:51.802439  566681 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:57:51.802492  566681 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:57:51.802586  566681 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:57:51.802729  566681 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:57:51.802823  566681 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:57:51.802894  566681 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:57:51.802944  566681 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:57:51.803058  566681 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803125  566681 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:57:51.803276  566681 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803350  566681 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:57:51.803420  566681 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:57:51.803491  566681 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:57:51.803557  566681 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:57:51.803634  566681 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:57:51.803717  566681 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:57:51.803807  566681 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:57:51.803899  566681 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:57:51.803950  566681 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:57:51.804029  566681 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:57:51.804088  566681 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:57:51.805702  566681 out.go:252]   - Booting up control plane ...
	I1002 06:57:51.805781  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:57:51.805846  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:57:51.805929  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:57:51.806028  566681 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:57:51.806148  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:57:51.806260  566681 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:57:51.806361  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:57:51.806420  566681 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:57:51.806575  566681 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:57:51.806669  566681 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:57:51.806717  566681 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.672587ms
	I1002 06:57:51.806806  566681 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:57:51.806892  566681 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1002 06:57:51.806963  566681 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:57:51.807067  566681 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:57:51.807185  566681 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.362189492s
	I1002 06:57:51.807284  566681 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.802664802s
	I1002 06:57:51.807338  566681 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003805488s
	I1002 06:57:51.807453  566681 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:57:51.807587  566681 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:57:51.807642  566681 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:57:51.807816  566681 kubeadm.go:318] [mark-control-plane] Marking the node addons-535714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:57:51.807890  566681 kubeadm.go:318] [bootstrap-token] Using token: 7tuk3k.1448ee54qv9op8vd
	I1002 06:57:51.810266  566681 out.go:252]   - Configuring RBAC rules ...
	I1002 06:57:51.810355  566681 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:57:51.810443  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:57:51.810582  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:57:51.810746  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:57:51.810922  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:57:51.811039  566681 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:57:51.811131  566681 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:57:51.811203  566681 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:57:51.811259  566681 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:57:51.811271  566681 kubeadm.go:318] 
	I1002 06:57:51.811321  566681 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:57:51.811327  566681 kubeadm.go:318] 
	I1002 06:57:51.811408  566681 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:57:51.811416  566681 kubeadm.go:318] 
	I1002 06:57:51.811438  566681 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:57:51.811524  566681 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:57:51.811568  566681 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:57:51.811574  566681 kubeadm.go:318] 
	I1002 06:57:51.811638  566681 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:57:51.811650  566681 kubeadm.go:318] 
	I1002 06:57:51.811704  566681 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:57:51.811711  566681 kubeadm.go:318] 
	I1002 06:57:51.811751  566681 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:57:51.811811  566681 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:57:51.811912  566681 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:57:51.811926  566681 kubeadm.go:318] 
	I1002 06:57:51.812042  566681 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:57:51.812153  566681 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:57:51.812165  566681 kubeadm.go:318] 
	I1002 06:57:51.812280  566681 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812417  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 \
	I1002 06:57:51.812453  566681 kubeadm.go:318] 	--control-plane 
	I1002 06:57:51.812464  566681 kubeadm.go:318] 
	I1002 06:57:51.812595  566681 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:57:51.812615  566681 kubeadm.go:318] 
	I1002 06:57:51.812711  566681 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812863  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 
	I1002 06:57:51.812931  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:51.812944  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:51.815686  566681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 06:57:51.817060  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 06:57:51.834402  566681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 06:57:51.858951  566681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:57:51.859117  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:51.859124  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535714 minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-535714 minikube.k8s.io/primary=true
	I1002 06:57:51.921378  566681 ops.go:34] apiserver oom_adj: -16
	I1002 06:57:52.030323  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:52.531214  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.031113  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.531050  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.030867  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.531128  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.030521  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.530702  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.030762  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.196068  566681 kubeadm.go:1113] duration metric: took 4.337043927s to wait for elevateKubeSystemPrivileges
	I1002 06:57:56.196100  566681 kubeadm.go:402] duration metric: took 16.3277794s to StartCluster
	I1002 06:57:56.196121  566681 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.196294  566681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:56.196768  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.197012  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:57:56.197039  566681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:56.197157  566681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:57:56.197305  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.197326  566681 addons.go:69] Setting ingress=true in profile "addons-535714"
	I1002 06:57:56.197323  566681 addons.go:69] Setting default-storageclass=true in profile "addons-535714"
	I1002 06:57:56.197353  566681 addons.go:238] Setting addon ingress=true in "addons-535714"
	I1002 06:57:56.197360  566681 addons.go:69] Setting registry=true in profile "addons-535714"
	I1002 06:57:56.197367  566681 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535714"
	I1002 06:57:56.197376  566681 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535714"
	I1002 06:57:56.197382  566681 addons.go:69] Setting volumesnapshots=true in profile "addons-535714"
	I1002 06:57:56.197391  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535714"
	I1002 06:57:56.197393  566681 addons.go:69] Setting ingress-dns=true in profile "addons-535714"
	I1002 06:57:56.197397  566681 addons.go:238] Setting addon volumesnapshots=true in "addons-535714"
	I1002 06:57:56.197403  566681 addons.go:238] Setting addon ingress-dns=true in "addons-535714"
	I1002 06:57:56.197413  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197417  566681 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:57:56.197432  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197438  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197454  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197317  566681 addons.go:69] Setting gcp-auth=true in profile "addons-535714"
	I1002 06:57:56.197804  566681 addons.go:69] Setting metrics-server=true in profile "addons-535714"
	I1002 06:57:56.197813  566681 mustload.go:65] Loading cluster: addons-535714
	I1002 06:57:56.197822  566681 addons.go:238] Setting addon metrics-server=true in "addons-535714"
	I1002 06:57:56.197849  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197953  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197985  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197348  566681 addons.go:69] Setting cloud-spanner=true in profile "addons-535714"
	I1002 06:57:56.197995  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198002  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.198025  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198027  566681 addons.go:69] Setting inspektor-gadget=true in profile "addons-535714"
	I1002 06:57:56.198034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198040  566681 addons.go:238] Setting addon inspektor-gadget=true in "addons-535714"
	I1002 06:57:56.198051  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198062  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198075  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198080  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198105  566681 addons.go:69] Setting volcano=true in profile "addons-535714"
	I1002 06:57:56.198115  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198118  566681 addons.go:238] Setting addon volcano=true in "addons-535714"
	I1002 06:57:56.198121  566681 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198148  566681 addons.go:69] Setting registry-creds=true in profile "addons-535714"
	I1002 06:57:56.198149  566681 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-535714"
	I1002 06:57:56.198007  566681 addons.go:238] Setting addon cloud-spanner=true in "addons-535714"
	I1002 06:57:56.197369  566681 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198159  566681 addons.go:238] Setting addon registry-creds=true in "addons-535714"
	I1002 06:57:56.197383  566681 addons.go:238] Setting addon registry=true in "addons-535714"
	I1002 06:57:56.198168  566681 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-535714"
	I1002 06:57:56.197305  566681 addons.go:69] Setting yakd=true in profile "addons-535714"
	I1002 06:57:56.198174  566681 addons.go:69] Setting storage-provisioner=true in profile "addons-535714"
	I1002 06:57:56.198182  566681 addons.go:238] Setting addon yakd=true in "addons-535714"
	I1002 06:57:56.198188  566681 addons.go:238] Setting addon storage-provisioner=true in "addons-535714"
	I1002 06:57:56.197356  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535714"
	I1002 06:57:56.197990  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198337  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198362  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198371  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198392  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198402  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198453  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198563  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198685  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198716  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198796  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198823  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198872  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198882  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198903  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.199225  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199278  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199496  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199602  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199605  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.199635  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.200717  566681 out.go:179] * Verifying Kubernetes components...
	I1002 06:57:56.203661  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:56.205590  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205627  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.205734  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205767  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.207434  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.207479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.210405  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.210443  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.213438  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.213479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.214017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.214056  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.232071  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1002 06:57:56.233110  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.234209  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.234234  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.234937  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.236013  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.236165  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.237450  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I1002 06:57:56.239323  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I1002 06:57:56.239414  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1002 06:57:56.240034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240196  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240748  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.240776  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.240868  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1002 06:57:56.240881  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.241379  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.241396  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.241535  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.242519  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.242540  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.242696  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.242735  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.242850  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.243325  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1002 06:57:56.243893  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.243945  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.244617  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.244654  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.245057  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.245890  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.245907  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.246010  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 06:57:56.246033  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1002 06:57:56.246568  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.247024  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.247099  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.247133  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.247421  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1002 06:57:56.247710  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.247729  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.248188  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.248445  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.249846  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.250467  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.251029  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251054  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.251579  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251601  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.252078  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.252654  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.252734  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.255593  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.255986  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.256022  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.257178  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.257900  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.257951  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.258275  566681 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-535714"
	I1002 06:57:56.259770  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.259874  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.260317  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.260360  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.260738  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.260770  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.261307  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.261989  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.262034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.263359  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I1002 06:57:56.263562  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I1002 06:57:56.264010  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.264539  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.264559  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.265015  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.265220  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.268199  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I1002 06:57:56.268835  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.269385  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.269407  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.269800  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.272103  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.272173  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.272820  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.274630  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1002 06:57:56.275810  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I1002 06:57:56.275999  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1002 06:57:56.276099  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I1002 06:57:56.276317  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1002 06:57:56.276957  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277804  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.277826  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.277935  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277992  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.279294  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.279318  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.279418  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279522  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1002 06:57:56.279526  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.280424  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280801  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280956  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:57:56.280961  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281067  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281080  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281248  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281259  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281396  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.280977  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281804  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.281870  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.282274  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282869  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.282901  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282927  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.282975  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.283442  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.284009  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.284202  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.284751  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:56.284768  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:57:56.284787  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.284857  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.284890  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.285017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.285054  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.288207  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.289274  566681 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:57:56.289290  566681 addons.go:238] Setting addon default-storageclass=true in "addons-535714"
	I1002 06:57:56.289364  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.289753  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.289797  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290042  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1002 06:57:56.290151  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.290556  566681 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:56.290578  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.290579  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:57:56.290609  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290771  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290990  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.291089  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1002 06:57:56.291362  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291376  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.291505  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291516  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.292055  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.293244  566681 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:57:56.294939  566681 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:56.294996  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:57:56.295277  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.296317  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.296363  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.296433  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I1002 06:57:56.297190  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.297368  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.300772  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.300866  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.300946  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.300966  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.300983  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301003  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301026  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.301076  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1002 06:57:56.301165  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.301203  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.301228  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1002 06:57:56.301400  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.301411  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.301454  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.301467  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.303443  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.303720  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.303466  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.303491  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.303762  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.303770  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.303776  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303526  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.303632  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.304435  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.304932  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.305291  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.305345  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.305464  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.305492  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1002 06:57:56.305495  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.305508  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.305577  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.305592  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	W1002 06:57:56.305630  566681 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:57:56.306621  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307189  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.307311  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.307383  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.307409  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307505  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.307540  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.307955  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.307981  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.308071  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.308163  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.308587  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.309033  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309057  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309132  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309293  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.309302  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309314  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309372  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.309533  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309698  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309703  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.309839  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310208  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310523  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.311044  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.311749  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.313557  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.316426  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1002 06:57:56.319293  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1002 06:57:56.319454  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.319564  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1002 06:57:56.319675  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1002 06:57:56.319683  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.319813  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.320386  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.320405  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.320695  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.320492  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.321204  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.321258  566681 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:57:56.321684  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.321443  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I1002 06:57:56.321593  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.321816  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.322144  566681 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:57:56.322156  566681 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:57:56.323037  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.323050  566681 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:56.323066  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:57:56.323087  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.323146  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.323323  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.323337  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.324564  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:56.324583  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:57:56.324603  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.324892  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325026  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.325041  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.325304  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1002 06:57:56.325602  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325730  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.325892  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326132  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326261  566681 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:57:56.327284  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.327472  566681 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:57:56.327597  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:57:56.327623  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.328569  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.328642  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.328661  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329119  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.329383  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.329634  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.329665  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329932  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.330003  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.331010  566681 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:57:56.331650  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332245  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:57:56.332277  566681 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:57:56.332261  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332297  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.332372  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.333369  566681 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:57:56.333621  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.333646  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.333810  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.334276  566681 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:57:56.334843  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.335194  566681 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:56.335210  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:57:56.335228  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.335446  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.335655  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I1002 06:57:56.335851  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.336132  566681 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:57:56.336170  566681 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:57:56.336280  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.336440  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 06:57:56.336618  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.337098  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.338250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.338315  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.338584  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.338676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.338709  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.338721  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.339313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.339382  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.339452  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.339507  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.340336  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.340677  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.340657  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341043  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.341796  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.341865  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342040  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342263  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342431  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342440  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342454  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342502  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342595  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342614  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342621  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342695  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.342072  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343379  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343750  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.343817  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.343832  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.344562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.344702  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344753  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.344946  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.345322  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.345404  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345404  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.345548  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.345606  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345806  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.346007  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.346320  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.346590  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.346862  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1002 06:57:56.347602  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.347914  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.348757  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.348800  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.349261  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.349633  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.349706  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.350337  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:57:56.351587  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:57:56.351643  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.351655  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:57:56.352903  566681 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:57:56.352987  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:57:56.353046  566681 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:57:56.353092  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.352987  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:57:56.353974  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1002 06:57:56.354300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1002 06:57:56.354530  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:57:56.354545  566681 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:57:56.354562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.354607  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355031  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355314  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355362  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355747  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.355869  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:57:56.355907  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355921  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355982  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.356446  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.356686  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.358485  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:57:56.359466  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.359801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360238  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360272  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.360643  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360654  566681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:56.360667  566681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:57:56.360676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360684  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.360847  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.360902  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360949  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.361063  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361261  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.361264  566681 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:57:56.361278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.361264  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:57:56.361448  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.361531  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361713  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.362047  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.363668  566681 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:57:56.363670  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:57:56.364768  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365172  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:56.365189  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:57:56.365208  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.365463  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.365492  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365867  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.366200  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.366332  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:57:56.366394  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.366567  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.367647  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:57:56.367669  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:57:56.367689  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.369424  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370073  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.370181  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370353  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.370354  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1002 06:57:56.370539  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.370710  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.370855  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.371120  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.371862  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.371993  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.372440  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.372590  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.372646  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.373687  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.373711  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.373884  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.374060  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.374270  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.374438  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.374887  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.376513  566681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:57:56.377878  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:56.377895  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:57:56.377926  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.381301  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.381862  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.381898  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.382058  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.382245  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.382379  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.382525  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	W1002 06:57:56.611250  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611293  566681 retry.go:31] will retry after 268.923212ms: ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	W1002 06:57:56.611372  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611378  566681 retry.go:31] will retry after 284.79555ms: ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:57.238066  566681 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:57:57.238093  566681 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:57:57.274258  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:57.291447  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:57.296644  566681 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:57.296665  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:57:57.317724  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:57.326760  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:57:57.326790  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:57:57.344388  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:57.359635  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:57:57.359666  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:57:57.391219  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:57.397913  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:57.466213  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:57.539770  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:57:57.539800  566681 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:57:57.565073  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:57:57.565109  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:57:57.626622  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.42956155s)
	I1002 06:57:57.626664  566681 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.422968545s)
	I1002 06:57:57.626751  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:57.626829  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:57:57.788309  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:57:57.788340  566681 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:57:57.863163  566681 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:57.863190  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:57:57.896903  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:57:57.896955  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:57:57.923302  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:58.011690  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:58.012981  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:58.110306  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:57:58.110346  566681 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:57:58.142428  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:57:58.142456  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:57:58.216082  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:57:58.216112  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:57:58.218768  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:58.222643  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.222669  566681 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:57:58.429860  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:57:58.429897  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:57:58.485954  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:57:58.485995  566681 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:57:58.501916  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:57:58.501955  566681 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:57:58.521314  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.818318  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:57:58.818357  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:57:58.833980  566681 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:58.834010  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:57:58.873392  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:58.873431  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:57:59.176797  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:59.186761  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:57:59.186798  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:57:59.305759  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:59.719259  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:57:59.719285  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:58:00.188246  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:58:00.188281  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:58:00.481133  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.20682266s)
	I1002 06:58:00.481238  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.481605  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.481667  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.481693  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.481705  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481717  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.482053  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.482070  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.482081  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.644178  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:58:00.644209  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:58:01.086809  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:58:01.086834  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:58:01.452986  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:01.453026  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:58:02.150700  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:02.601667  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.310178549s)
	I1002 06:58:02.601725  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.28395893s)
	I1002 06:58:02.601734  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601747  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601765  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601795  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601869  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.25743101s)
	I1002 06:58:02.601905  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601924  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601917  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.210665802s)
	I1002 06:58:02.601951  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601961  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602046  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602055  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602062  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602178  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602365  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602381  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602379  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602385  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602399  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602401  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602410  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602351  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602416  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602424  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602390  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602460  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602330  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602552  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602560  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602566  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602767  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602847  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.603001  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603018  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602869  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602869  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603276  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:03.763895  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:58:03.763944  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:03.767733  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768302  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:03.768333  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768654  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:03.768868  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:03.769064  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:03.769213  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:04.277228  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:58:04.505226  566681 addons.go:238] Setting addon gcp-auth=true in "addons-535714"
	I1002 06:58:04.505305  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:58:04.505781  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.505848  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.521300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1002 06:58:04.521841  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.522464  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.522494  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.522889  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.523576  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.523636  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.537716  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1002 06:58:04.538258  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.538728  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.538756  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.539153  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.539385  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:58:04.541614  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:58:04.541849  566681 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:58:04.541880  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:04.545872  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546401  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:04.546429  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546708  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:04.546895  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:04.547027  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:04.547194  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:05.770941  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.372950609s)
	I1002 06:58:05.771023  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771039  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771065  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.304816797s)
	I1002 06:58:05.771113  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771131  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771178  566681 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.1443973s)
	I1002 06:58:05.771222  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.144363906s)
	I1002 06:58:05.771258  566681 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 06:58:05.771308  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.847977896s)
	W1002 06:58:05.771333  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771355  566681 retry.go:31] will retry after 297.892327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771456  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758443398s)
	I1002 06:58:05.771481  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771490  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771540  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.759815099s)
	I1002 06:58:05.771573  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771575  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.552784974s)
	I1002 06:58:05.771584  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771595  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771611  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771719  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.250362363s)
	I1002 06:58:05.771747  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771759  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771942  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.771963  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772013  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772022  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772032  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772040  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772044  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772052  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772059  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772194  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772224  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772230  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772248  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772485  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772523  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772532  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772541  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772549  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772589  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772628  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772636  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772645  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772653  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772709  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772796  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773193  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.773210  566681 addons.go:479] Verifying addon registry=true in "addons-535714"
	I1002 06:58:05.773744  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.773810  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773834  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774038  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.774118  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774129  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772818  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772841  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774925  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774937  566681 addons.go:479] Verifying addon ingress=true in "addons-535714"
	I1002 06:58:05.772862  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775004  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775017  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.775024  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772880  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775347  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775380  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775386  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775394  566681 addons.go:479] Verifying addon metrics-server=true in "addons-535714"
	I1002 06:58:05.776348  566681 node_ready.go:35] waiting up to 6m0s for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.776980  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.776996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.776998  566681 out.go:179] * Verifying registry addon...
	I1002 06:58:05.779968  566681 out.go:179] * Verifying ingress addon...
	I1002 06:58:05.780767  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:58:05.782010  566681 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:58:05.829095  566681 node_ready.go:49] node "addons-535714" is "Ready"
	I1002 06:58:05.829146  566681 node_ready.go:38] duration metric: took 52.75602ms for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.829168  566681 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:58:05.829233  566681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:58:05.834443  566681 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:58:05.834466  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:05.835080  566681 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:58:05.835100  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:05.875341  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.875368  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.875751  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.875763  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.875778  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:58:05.875878  566681 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 06:58:05.909868  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.909898  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.910207  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.910270  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.910287  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.069811  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:06.216033  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039174172s)
	W1002 06:58:06.216104  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216108  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.910297192s)
	I1002 06:58:06.216150  566681 retry.go:31] will retry after 161.340324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216192  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216210  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216504  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216542  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216549  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.216557  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216563  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216800  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216843  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.218514  566681 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535714 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:58:06.294875  566681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535714" context rescaled to 1 replicas
	I1002 06:58:06.324438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:06.327459  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.377937  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:58:06.794270  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.798170  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.296006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.297921  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.825812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.825866  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.904551  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.753782282s)
	I1002 06:58:07.904616  566681 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362740219s)
	I1002 06:58:07.904661  566681 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.075410022s)
	I1002 06:58:07.904685  566681 api_server.go:72] duration metric: took 11.707614799s to wait for apiserver process to appear ...
	I1002 06:58:07.904692  566681 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:58:07.904618  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.904714  566681 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1002 06:58:07.904746  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905650  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.905668  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.905673  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:07.905682  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.905697  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905988  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.906010  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.906023  566681 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:58:07.917720  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:58:07.917721  566681 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:58:07.919394  566681 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:58:07.920319  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:58:07.920611  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:58:07.920631  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:58:07.923712  566681 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1002 06:58:07.935689  566681 api_server.go:141] control plane version: v1.34.1
	I1002 06:58:07.935726  566681 api_server.go:131] duration metric: took 31.026039ms to wait for apiserver health ...
	I1002 06:58:07.935739  566681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:58:07.938642  566681 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:58:07.938662  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:07.962863  566681 system_pods.go:59] 20 kube-system pods found
	I1002 06:58:07.962924  566681 system_pods.go:61] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:07.962934  566681 system_pods.go:61] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962944  566681 system_pods.go:61] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962951  566681 system_pods.go:61] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:07.962955  566681 system_pods.go:61] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:07.962959  566681 system_pods.go:61] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending
	I1002 06:58:07.962962  566681 system_pods.go:61] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:07.962965  566681 system_pods.go:61] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:07.962968  566681 system_pods.go:61] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:07.962973  566681 system_pods.go:61] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:07.962979  566681 system_pods.go:61] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:07.962983  566681 system_pods.go:61] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:07.962988  566681 system_pods.go:61] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:07.962994  566681 system_pods.go:61] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:07.962999  566681 system_pods.go:61] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:07.963005  566681 system_pods.go:61] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:07.963017  566681 system_pods.go:61] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:07.963022  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963031  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963036  566681 system_pods.go:61] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:07.963048  566681 system_pods.go:74] duration metric: took 27.298515ms to wait for pod list to return data ...
	I1002 06:58:07.963061  566681 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:58:07.979696  566681 default_sa.go:45] found service account: "default"
	I1002 06:58:07.979723  566681 default_sa.go:55] duration metric: took 16.655591ms for default service account to be created ...
	I1002 06:58:07.979733  566681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:58:08.050371  566681 system_pods.go:86] 20 kube-system pods found
	I1002 06:58:08.050407  566681 system_pods.go:89] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:08.050415  566681 system_pods.go:89] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050424  566681 system_pods.go:89] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050430  566681 system_pods.go:89] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:08.050438  566681 system_pods.go:89] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:08.050443  566681 system_pods.go:89] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:58:08.050449  566681 system_pods.go:89] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:08.050456  566681 system_pods.go:89] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:08.050463  566681 system_pods.go:89] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:08.050472  566681 system_pods.go:89] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:08.050477  566681 system_pods.go:89] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:08.050485  566681 system_pods.go:89] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:08.050493  566681 system_pods.go:89] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:08.050504  566681 system_pods.go:89] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:08.050512  566681 system_pods.go:89] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:08.050523  566681 system_pods.go:89] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:08.050528  566681 system_pods.go:89] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:08.050537  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050542  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050551  566681 system_pods.go:89] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:08.050567  566681 system_pods.go:126] duration metric: took 70.827007ms to wait for k8s-apps to be running ...
	I1002 06:58:08.050583  566681 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:58:08.050638  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:08.169874  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:58:08.169907  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:58:08.289577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.292025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.296361  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.296391  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:58:08.432642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:08.459596  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.795545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.796983  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.947651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.295174  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.296291  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.426575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.794891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.794937  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.929559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.288382  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.293181  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.428326  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.511821  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.441960114s)
	W1002 06:58:10.511871  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511903  566681 retry.go:31] will retry after 394.105371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511999  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.133998235s)
	I1002 06:58:10.512065  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512084  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512009  566681 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.461351775s)
	I1002 06:58:10.512151  566681 system_svc.go:56] duration metric: took 2.461548607s WaitForService to wait for kubelet
	I1002 06:58:10.512170  566681 kubeadm.go:586] duration metric: took 14.315097833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:58:10.512195  566681 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:58:10.512421  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512436  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.512445  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512451  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512808  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512831  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.525421  566681 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 06:58:10.525467  566681 node_conditions.go:123] node cpu capacity is 2
	I1002 06:58:10.525483  566681 node_conditions.go:105] duration metric: took 13.282233ms to run NodePressure ...
	I1002 06:58:10.525500  566681 start.go:241] waiting for startup goroutines ...
	I1002 06:58:10.876948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.878962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.907099  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:10.933831  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.474178987s)
	I1002 06:58:10.933902  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.933917  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934327  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934351  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.934363  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.934372  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934718  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934741  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.936073  566681 addons.go:479] Verifying addon gcp-auth=true in "addons-535714"
	I1002 06:58:10.939294  566681 out.go:179] * Verifying gcp-auth addon...
	I1002 06:58:10.941498  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:58:10.967193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.967643  566681 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:58:10.967661  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.291995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.292859  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.426822  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.449596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.787220  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.790007  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.927177  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.946352  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.291330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.291893  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.412988  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.505843996s)
	W1002 06:58:12.413060  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.413088  566681 retry.go:31] will retry after 830.72209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.425033  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.449434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.790923  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.792837  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.929132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.949344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.244514  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:13.289311  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.291334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.429008  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.453075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.786448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.787372  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.926128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.944808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.290787  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.291973  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.426597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.446124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.495404  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250841467s)
	W1002 06:58:14.495476  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.495515  566681 retry.go:31] will retry after 993.52867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.787133  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.787363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.925480  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.947120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.288745  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.290247  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.426491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.446707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.489998  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:15.790203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.790718  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.926338  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.947762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.288050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.294216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.426315  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.448623  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.749674  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.259622296s)
	W1002 06:58:16.749739  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.749766  566681 retry.go:31] will retry after 685.893269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.784937  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.789418  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.924303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.945254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.286582  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.289258  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.429493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.436551  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:17.446130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.789304  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.789354  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.927192  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.947272  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.287684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.287964  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.425334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.446542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.793984  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357370737s)
	W1002 06:58:18.794035  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.794058  566681 retry.go:31] will retry after 1.769505645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.818834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.819319  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.926250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.946166  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.286120  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.287299  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.427368  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.446296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.788860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.790575  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.926266  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.946838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.285631  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.286287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.426458  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.448700  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.563743  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:20.784983  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.792452  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.928439  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.946213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.354534  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.355101  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.424438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.447780  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.787792  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.788239  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.926313  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.946909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.986148  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422343909s)
	W1002 06:58:21.986215  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:21.986241  566681 retry.go:31] will retry after 1.591159568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:22.479105  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.490010  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.490062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:22.490154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.785438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.785505  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.924097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.945260  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.287691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.288324  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.424675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.444770  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.578011  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:23.942123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.948294  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.948453  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.950791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.287641  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.287755  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.427062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.445753  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.646106  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.068053257s)
	W1002 06:58:24.646165  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.646192  566681 retry.go:31] will retry after 2.605552754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.785021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.786706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.924880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.945307  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.293097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.295253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.426401  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.448785  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.786965  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.789832  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.926383  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.947419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.286815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.424942  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.444763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.788540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.788706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.945896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.252378  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:27.285347  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.286330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.426765  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.444675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.783930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.785939  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.925152  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.946794  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:27.992201  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:27.992240  566681 retry.go:31] will retry after 8.383284602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:28.292474  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.293236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.449878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:28.785825  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.786277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.930557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.944934  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.288741  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.289425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.425596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.448825  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.791293  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.791772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.925493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.947040  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.289093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.289274  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.429043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.445086  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.787343  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.788106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.925916  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.945578  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.287772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.288130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.424173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.444911  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.839251  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.839613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.924537  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.945244  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.285593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.287197  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.428173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.445646  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.790722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.792545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.924044  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.948465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.287477  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.287815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.426173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.445002  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.789091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.789248  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.926672  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.945340  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.287879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.291550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.424476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.446160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.790769  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.793072  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.924896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.945667  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.523723  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.524500  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.524737  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.525162  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.790230  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.791831  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.924241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.944951  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.289627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.289977  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.375684  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:36.425592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.451074  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.785903  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.787679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.925288  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.947999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.311635  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.311959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.426029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.446091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.636801  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261070571s)
	W1002 06:58:37.636852  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.636877  566681 retry.go:31] will retry after 12.088306464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.784365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.786077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.924729  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.287422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.288052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.424776  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.446043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.787364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.788336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.929977  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.952669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.285777  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.286130  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.425664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.445359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.791043  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.792332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.927261  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.949133  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.297847  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.298155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.508411  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.508530  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.790869  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.791640  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.926541  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.946409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.284335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.288282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.425342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.445476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.786456  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.787369  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.925788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.285122  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.289954  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.427812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.448669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.789086  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.793784  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.981476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.983793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.287301  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.287653  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.425089  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.446115  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.788762  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.788804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.925841  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.946154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.291446  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.291561  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.424642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.445497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.784807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.785666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.924223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.945793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.287330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.288804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.425720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.445387  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.784761  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.787219  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.925198  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.945101  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.287324  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.287453  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.425817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.785000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.786016  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.924786  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.944720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.284615  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.286350  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.424772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.444696  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.784801  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.786247  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.924675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.945863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.285254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.286071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.424850  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.444546  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.784736  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.787062  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.924609  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.285611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.286827  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.424821  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.444716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.726164  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:49.787775  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.787812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.924332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.945915  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:50.285693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.287323  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.425093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:50.457717  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.457755  566681 retry.go:31] will retry after 14.401076568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.785374  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.786592  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.924494  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.946113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.285309  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.424519  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.446358  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.785764  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.787620  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.924671  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.945518  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.284608  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.286328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.426252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.444955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.785415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.786501  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.924360  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.945603  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.286059  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.286081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.426061  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.445434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.784563  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.787018  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.926712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.285670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.286270  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.425263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.445015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.783971  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.785518  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.924652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.944701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.284095  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.285982  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.425045  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.445159  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.784789  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.785811  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.925024  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.945670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.284935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.286230  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.424865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.444979  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.784010  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.785095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.925082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.945267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.285037  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.290841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.423992  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.444492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.785708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.786647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.944543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.284397  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.286589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.424263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.446278  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.784592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.786223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.925275  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.945639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.284167  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.286213  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.424554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.446331  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.786351  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.786532  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.924799  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.944552  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.284593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.286147  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.427708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.446640  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.783993  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.786195  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.925109  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.945645  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.284268  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.286567  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.425880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.444926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.784751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.786669  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.924082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.945409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.285484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.287955  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.424588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.445328  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.785933  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.786611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.924311  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.945554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.284664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.286758  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.424558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.445443  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.785718  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.786015  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.924950  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.945320  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.285692  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.287456  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.423909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.445028  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.784417  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.785847  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.859977  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:04.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.944867  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:05.286410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.424815  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.444725  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:59:05.565727  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.565775  566681 retry.go:31] will retry after 12.962063584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.784083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.785399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.924301  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.945548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.284341  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.285025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.424577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.445930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.785592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.785777  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.924651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.944548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.284807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.286980  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.424593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.445604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.785681  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.786565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.924412  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.945298  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.284890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.285768  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.424422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.446875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.784632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.786747  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.924452  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.284701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.286699  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.424832  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.445005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.785080  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.923720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.944468  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.285848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.285877  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.425574  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.445229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.785800  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.788069  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.924958  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.945132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.284817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.286986  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.424693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.444335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.786755  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.788412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.924402  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.944935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.285499  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.285734  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.424709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.445959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.785549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.788041  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.924691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.944292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.285683  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.424754  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.445585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.784745  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.786053  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.925403  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.945860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.285184  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.286959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.424804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.446097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.791558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.791556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.927542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.949956  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.284639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.286617  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.426580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.446175  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.784496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.787071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.925830  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.945618  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.286160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:16.287392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.424973  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.446497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.789545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.790116  566681 kapi.go:107] duration metric: took 1m11.009348953s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:59:16.925187  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.947267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.287647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.426165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:17.450844  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.786988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.928406  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.027597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.293020  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.429378  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.449227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.528488  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:18.796448  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.929553  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.946292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.288404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.429199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.452666  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.792639  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.864991  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.336449949s)
	W1002 06:59:19.865069  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:19.865160  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865179  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865566  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:59:19.865575  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:59:19.865834  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:59:19.865969  566681 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
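(Editor's note, not part of the log: the retry above fails because every document in a multi-document Kubernetes manifest must declare top-level `apiVersion` and `kind` fields; kubectl's schema validation rejects `ig-crd.yaml` since at least one document omits both. A minimal, hypothetical sketch of that check — a naive line-based illustration, not minikube's or kubectl's actual code, which parses YAML properly:)

```python
def missing_required_fields(manifest: str) -> list[str]:
    """Return the required top-level keys absent from one YAML document.

    Illustrative only: scans lines for unindented 'apiVersion:'/'kind:' keys
    instead of parsing YAML, which is what real tooling would do.
    """
    present = set()
    for line in manifest.splitlines():
        key = line.split(":", 1)[0]
        if key in ("apiVersion", "kind"):  # top-level keys have no indent
            present.add(key)
    return [k for k in ("apiVersion", "kind") if k not in present]

# A well-formed CRD header passes; a document like the failing one does not.
ok = "apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n"
broken = "metadata:\n  name: example\n"  # no apiVersion, no kind
print(missing_required_fields(ok))      # []
print(missing_required_fields(broken))  # ['apiVersion', 'kind']
```

The log's suggested workaround, `--validate=false`, would skip this check entirely rather than fix the manifest.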
	I1002 06:59:19.924481  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.945058  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.286730  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.424767  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.445496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.787056  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.925303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.945594  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.285610  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.424114  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.445438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.786589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.924253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.944783  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.285375  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.424724  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.445811  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.828328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.929492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.945629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.286455  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.424116  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.444871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.785953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.924350  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.945321  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.286907  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.424613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.445706  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.786265  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.925165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.944432  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.286899  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.424337  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.445373  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.786646  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.924121  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.944695  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.286707  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.425250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.445323  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.786287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.945489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.286403  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.424957  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.445376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.786820  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.924170  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.945197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.424241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.445542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.786419  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.945740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.286366  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.425536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.445687  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.788123  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.944760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.285395  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.425015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.445071  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.786362  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.925693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.945540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.286268  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.424296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.446123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.786155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.926684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.945375  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.286413  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.424180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.444838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.786253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.925151  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.944944  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.288748  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.425620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.445650  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.786358  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.924738  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.944757  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.285092  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.424998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.445067  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.786516  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.924306  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.945543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.286428  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.423533  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.445039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.785517  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.944555  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.286468  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.424778  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.444808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.785451  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.924018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.287660  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.424005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.445419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.785743  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.924870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.944575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.286370  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.424689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.444639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.786644  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.928760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.945529  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.286055  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.425011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.787058  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.924829  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.944865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.285681  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.424212  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.445570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.786536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.924039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.945611  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.286872  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.425081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.445160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.785854  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.924803  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.945395  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.286806  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.424531  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.445213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.785794  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.924199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.946416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.287223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.425005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.445179  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.786152  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.945545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.286313  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.425004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.445925  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.786682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.944902  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.286167  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.424932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.444879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.785378  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.925864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.945123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.286422  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.424954  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.445018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.786489  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.924425  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.945064  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.286244  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.425181  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.445110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.785417  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.923870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.944712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.287782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.424751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.444542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.786556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.924410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.945514  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.286856  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.424634  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.444823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.786341  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.925249  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.945585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.287532  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.427364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.449565  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.926679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.947416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.289682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.428232  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.445465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.787537  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.926415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.945253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.285757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.424433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.448251  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.785971  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.928422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.946461  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.286536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.452271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.786128  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.926032  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.946426  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.287601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.424345  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.445705  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.787096  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.924759  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.946688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.290180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.519704  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.519891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.926689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.946557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.286054  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.425914  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.447300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.785957  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.924030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.949871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.291565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.428120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.526092  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.786283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.933203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.952823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.290757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:58.425788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:58.445898  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.785286  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.135410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.135484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.289658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.424763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.444901  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.789990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.927768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.950570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.288666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.424489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.444995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.785712  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.928193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.945797  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.289874  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.429342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.447102  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.787399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.924633  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.944955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.288296  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.432709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.448119  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.788304  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.936551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.950283  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.291180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.429826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.446896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.789649  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.930297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.285728  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.423878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.445021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.785989  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.926604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.946365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.289629  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.424560  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.446580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.786184  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.925038  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.286414  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.425072  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.445415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.786235  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.924932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.945108  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.286318  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.425639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.445791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.787192  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.945680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.286388  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.424699  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.445180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.786177  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.927180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.945006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.285412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.424690  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.444685  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.787988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.926782  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.944680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.286385  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.425422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.445890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.785391  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.925292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.946110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.286953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.424926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.445097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.785990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.925536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.945882  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.286095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.426218  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.445400  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.787180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.924959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.945605  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.424843  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.445297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.786034  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.927087  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.945676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.286216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.424888  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.444768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.785283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.925300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.945536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.287658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.424359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.445282  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.785834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.924384  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.945604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.286392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.424670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.445327  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.786482  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.924913  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.944676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.286962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.428554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.445872  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.787125  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.924730  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.945508  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.286528  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.426864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.786434  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.926688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.945265  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.286255  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.425491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.446113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.925826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.946549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.286336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.424707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.444772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.785404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.925678  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.945252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.425487  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.447029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.786550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.945389  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.288156  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.425586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.446602  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.787696  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.924004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.286521  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.424493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.446224  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.786604  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.925118  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.945482  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.424632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.445848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.785791  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.927001  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.944907  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.288049  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.424875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.444559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.786767  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.925226  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.945050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.285958  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.426083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.444740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.787052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.945062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.285717  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.424050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.444966  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.787841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.924740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.945492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.286484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.424236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.445504  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.786601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.924551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.945948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.288423  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.424871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.445286  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.786695  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.926223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.945407  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.286021  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.425588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.445469  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.786883  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.926085  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.945814  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.287360  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.424981  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.445361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.787680  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.924556  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.945363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.288077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.425366  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.447433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.847272  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.946629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.946982  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.285658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.424106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.445538  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.787044  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.927886  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.944580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.290469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.425444  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.448620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.789282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.930009  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.948721  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.287469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.432852  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.446652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.788507  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.930180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.954772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.293484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.435262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.449271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.788843  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.928945  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.288443  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.427657  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.447716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.787995  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.933694  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.946106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.287636  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.427229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.446000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.788221  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.925863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.944669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:39.286808  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:39.425719  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:39.446011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.005533  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.011858  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.013227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.289216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.429330  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.446597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.788887  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.934361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.949590  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.288436  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.426586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.446712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.790082  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.926762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.948030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.286904  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.428171  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.447262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.787879  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.928999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.947900  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.289340  566681 kapi.go:107] duration metric: took 2m37.507327929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 07:00:43.426593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.445627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.927030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.946124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.426277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.445511  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.928128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.945892  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.424940  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.445245  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.925479  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.948084  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:46.427998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:46.446348  566681 kapi.go:107] duration metric: took 2m35.504841728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 07:00:46.448361  566681 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-535714 cluster.
	I1002 07:00:46.449772  566681 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 07:00:46.451121  566681 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 07:00:46.925947  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.429007  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.927793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.430587  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.930344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.428197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.928448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:50.425299  566681 kapi.go:107] duration metric: took 2m42.504972928s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 07:00:50.428467  566681 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 07:00:50.429978  566681 addons.go:514] duration metric: took 2m54.232824958s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds metrics-server storage-provisioner storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 07:00:50.430050  566681 start.go:246] waiting for cluster config update ...
	I1002 07:00:50.430076  566681 start.go:255] writing updated cluster config ...
	I1002 07:00:50.430525  566681 ssh_runner.go:195] Run: rm -f paused
	I1002 07:00:50.439887  566681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:50.446240  566681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.451545  566681 pod_ready.go:94] pod "coredns-66bc5c9577-w7hjm" is "Ready"
	I1002 07:00:50.451589  566681 pod_ready.go:86] duration metric: took 5.295665ms for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.454257  566681 pod_ready.go:83] waiting for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.459251  566681 pod_ready.go:94] pod "etcd-addons-535714" is "Ready"
	I1002 07:00:50.459291  566681 pod_ready.go:86] duration metric: took 4.998226ms for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.463385  566681 pod_ready.go:83] waiting for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.473863  566681 pod_ready.go:94] pod "kube-apiserver-addons-535714" is "Ready"
	I1002 07:00:50.473899  566681 pod_ready.go:86] duration metric: took 10.481477ms for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.478391  566681 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.845519  566681 pod_ready.go:94] pod "kube-controller-manager-addons-535714" is "Ready"
	I1002 07:00:50.845556  566681 pod_ready.go:86] duration metric: took 367.127625ms for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.046035  566681 pod_ready.go:83] waiting for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.445054  566681 pod_ready.go:94] pod "kube-proxy-z495t" is "Ready"
	I1002 07:00:51.445095  566681 pod_ready.go:86] duration metric: took 399.024039ms for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.644949  566681 pod_ready.go:83] waiting for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045721  566681 pod_ready.go:94] pod "kube-scheduler-addons-535714" is "Ready"
	I1002 07:00:52.045756  566681 pod_ready.go:86] duration metric: took 400.769133ms for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045769  566681 pod_ready.go:40] duration metric: took 1.605821704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:52.107681  566681 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 07:00:52.109482  566681 out.go:179] * Done! kubectl is now configured to use "addons-535714" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.673294872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388953673265480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=159c1a67-2d05-4f23-80c4-89111e1f232e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.673948863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3068b746-d632-42b9-be13-bccf12257c3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.674004588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3068b746-d632-42b9-be13-bccf12257c3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.674360947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3
fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1
d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3068b746-d632-42b9-be13-bccf12257c3a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.715160948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca612852-3834-4293-85e1-6772f429cf4c name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.715234424Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca612852-3834-4293-85e1-6772f429cf4c name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.716540310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9e33e19-36cd-4c7b-a153-3cf7261683fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.717681537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388953717653415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9e33e19-36cd-4c7b-a153-3cf7261683fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.718401263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a620939-f7ca-4e0f-b3fb-9921230b86ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.718460272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a620939-f7ca-4e0f-b3fb-9921230b86ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.718767274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3
fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1
d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a620939-f7ca-4e0f-b3fb-9921230b86ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.755610780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfd3c1f7-b053-4e96-82bb-32487c5a92f0 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.755703623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfd3c1f7-b053-4e96-82bb-32487c5a92f0 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.757616368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b55ebd31-feb8-4297-a562-783ec8cb3a14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.759379359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388953759351436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b55ebd31-feb8-4297-a562-783ec8cb3a14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.760173908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf8e45ac-dca7-44f3-8cdb-2be0ee7c0f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.760281847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf8e45ac-dca7-44f3-8cdb-2be0ee7c0f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.760604572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3
fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1
d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf8e45ac-dca7-44f3-8cdb-2be0ee7c0f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.798488757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b7d3820-8a92-45f7-abc4-8562158e72ab name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.798575210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b7d3820-8a92-45f7-abc4-8562158e72ab name=/runtime.v1.RuntimeService/Version
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.800052165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f969735-81f3-4126-ad94-b76293fbe7de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.801333545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388953801306785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f969735-81f3-4126-ad94-b76293fbe7de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.802104963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=513e887c-41d2-4246-95ce-ec0cc4e9f02b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.802178371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=513e887c-41d2-4246-95ce-ec0cc4e9f02b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:09:13 addons-535714 crio[827]: time="2025-10-02 07:09:13.802467441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3
fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1
d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=513e887c-41d2-4246-95ce-ec0cc4e9f02b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86667c9385b67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   2fe38d26ed81e       busybox
	81f190fa89d8e       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             8 minutes ago       Running             controller                0                   2c0a4b75d16bb       ingress-nginx-controller-9cc49f96f-jcwrw
	2f84e33ebf14f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   9 minutes ago       Exited              patch                     0                   45c7f94d02bfb       ingress-nginx-admission-patch-46z2n
	5ce0b3e6c8fef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   9 minutes ago       Exited              create                    0                   13a0722f22fb7       ingress-nginx-admission-create-jsw7z
	d20e001ce5fa7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            9 minutes ago       Running             gadget                    0                   53cbb87b563ff       gadget-2hn79
	c68a602009da4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               10 minutes ago      Running             minikube-ingress-dns      0                   1239599eb3508       kube-ingress-dns-minikube
	0f29426982799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner       0                   348af25e84579       storage-provisioner
	58aa192645e96       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     11 minutes ago      Running             amd-gpu-device-plugin     0                   dba3c49629455       amd-gpu-device-plugin-f7qcs
	6e31cb36c4500       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             11 minutes ago      Running             coredns                   0                   4fcabfc373e60       coredns-66bc5c9577-w7hjm
	fb130499febb3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             11 minutes ago      Running             kube-proxy                0                   646600c8d86f7       kube-proxy-z495t
	466837c8cdfcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             11 minutes ago      Running             etcd                      0                   c7d4e0eb984a2       etcd-addons-535714
	da8295539fc0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             11 minutes ago      Running             kube-scheduler            0                   36d2846a22a84       kube-scheduler-addons-535714
	da58df3cad660       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             11 minutes ago      Running             kube-controller-manager   0                   63f4cb9d3437a       kube-controller-manager-addons-535714
	deaf436584a26       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             11 minutes ago      Running             kube-apiserver            0                   35f49d5f3b8fb       kube-apiserver-addons-535714
	
	
	==> coredns [6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb] <==
	[INFO] 10.244.0.7:35110 - 11487 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000105891s
	[INFO] 10.244.0.7:35110 - 31639 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100284s
	[INFO] 10.244.0.7:35110 - 25746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080168s
	[INFO] 10.244.0.7:35110 - 43819 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100728s
	[INFO] 10.244.0.7:35110 - 63816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124028s
	[INFO] 10.244.0.7:35110 - 35022 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129164s
	[INFO] 10.244.0.7:35110 - 28119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001725128s
	[INFO] 10.244.0.7:50584 - 36630 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148556s
	[INFO] 10.244.0.7:50584 - 36962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067971s
	[INFO] 10.244.0.7:37190 - 758 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052949s
	[INFO] 10.244.0.7:37190 - 1043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051809s
	[INFO] 10.244.0.7:37461 - 4143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057036s
	[INFO] 10.244.0.7:37461 - 4397 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049832s
	[INFO] 10.244.0.7:36180 - 39849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111086s
	[INFO] 10.244.0.7:36180 - 40050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069757s
	[INFO] 10.244.0.23:54237 - 52266 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001020809s
	[INFO] 10.244.0.23:46188 - 47837 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755825s
	[INFO] 10.244.0.23:50620 - 40298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145474s
	[INFO] 10.244.0.23:46344 - 40921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123896s
	[INFO] 10.244.0.23:50353 - 65439 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000272665s
	[INFO] 10.244.0.23:50633 - 23346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143762s
	[INFO] 10.244.0.23:52616 - 28857 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002777615s
	[INFO] 10.244.0.23:55533 - 44086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003112269s
	[INFO] 10.244.0.27:55844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000811242s
	[INFO] 10.244.0.27:51921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000498985s
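The NXDOMAIN/NOERROR pattern in the coredns log above is resolv.conf search-list expansion: the pod's resolver appends each search suffix before trying the bare name, because the name has fewer dots than the `ndots` threshold. A minimal sketch of that expansion, assuming the search list and `ndots:5` that kubelet writes into a pod's /etc/resolv.conf by default:

```python
# Sketch of glibc-style search-list expansion matching the coredns queries
# above. The search suffixes and ndots value are the Kubernetes pod defaults
# (an assumption here, not taken from this log's resolv.conf).
def expand_query(name: str, search: list[str], ndots: int = 5) -> list[str]:
    """Return candidate FQDNs in the order the resolver tries them."""
    if name.endswith("."):  # already fully qualified: no expansion
        return [name]
    candidates = [f"{name}.{suffix}" for suffix in search]
    absolute = [name]
    # With fewer than `ndots` dots, search suffixes are tried before the bare name.
    return candidates + absolute if name.count(".") < ndots else absolute + candidates

search_path = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
queries = expand_query("registry.kube-system.svc.cluster.local", search_path)
# The first three candidates are exactly the NXDOMAIN queries in the log;
# the final bare name is the one coredns answers NOERROR.
```

This is why each lookup of `registry.kube-system.svc.cluster.local` produces three NXDOMAIN round trips before the successful answer.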
	
	
	==> describe nodes <==
	Name:               addons-535714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-535714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535714
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:57:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535714
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:09:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-535714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 26ed18e3cae343e2ba2a85be4a0a7371
	  System UUID:                26ed18e3-cae3-43e2-ba2a-85be4a0a7371
	  Boot ID:                    73babc46-f812-4e67-b425-db513a204e97
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  gadget                      gadget-2hn79                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jcwrw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 amd-gpu-device-plugin-f7qcs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-w7hjm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-535714                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-535714                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-535714       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-z495t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-535714                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-535714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-535714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-535714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-535714 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-535714 event: Registered Node addons-535714 in Controller
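The "Allocated resources" percentages in the node description above come from summing per-pod requests against node capacity; a quick sketch reproducing the CPU figure (pod names abbreviated, values taken from the table above):

```python
# Reproduce the 850m (42%) CPU-request figure from the node description:
# sum of per-pod CPU requests in millicores over a 2-CPU node.
requests_m = {
    "ingress-nginx-controller": 100,
    "coredns": 100,
    "etcd": 100,
    "kube-apiserver": 250,
    "kube-controller-manager": 200,
    "kube-scheduler": 100,
}
total_m = sum(requests_m.values())   # 850m, matching "cpu 850m (42%)"
capacity_m = 2 * 1000                # node capacity: 2 CPUs
pct = total_m * 100 // capacity_m    # truncated percent, as kubectl prints it
```

850m of 2000m is 42.5%, which kubectl truncates to the 42% shown.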
	
	
	==> dmesg <==
	[  +1.779557] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 2 07:00] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.976810] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.109220] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.510995] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.560914] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.223140] kauditd_printk_skb: 56 callbacks suppressed
	[Oct 2 07:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.884695] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.185211] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.060908] kauditd_printk_skb: 58 callbacks suppressed
	[Oct 2 07:02] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.331616] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.250929] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:03] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.099939] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.783953] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 07:06] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000320] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 07:08] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.251210] kauditd_printk_skb: 10 callbacks suppressed
	[ +31.181499] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2] <==
	{"level":"warn","ts":"2025-10-02T06:59:59.121300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.835357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.121339Z","caller":"traceutil/trace.go:172","msg":"trace[1316712396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"183.87568ms","start":"2025-10-02T06:59:58.937457Z","end":"2025-10-02T06:59:59.121332Z","steps":["trace[1316712396] 'agreement among raft nodes before linearized reading'  (duration: 183.815946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:32.832647Z","caller":"traceutil/trace.go:172","msg":"trace[1453851995] linearizableReadLoop","detail":"{readStateIndex:1231; appliedIndex:1231; }","duration":"220.066962ms","start":"2025-10-02T07:00:32.612509Z","end":"2025-10-02T07:00:32.832576Z","steps":["trace[1453851995] 'read index received'  (duration: 220.05963ms)","trace[1453851995] 'applied index is now lower than readState.Index'  (duration: 6.189µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T07:00:32.832730Z","caller":"traceutil/trace.go:172","msg":"trace[302351669] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"243.94686ms","start":"2025-10-02T07:00:32.588772Z","end":"2025-10-02T07:00:32.832719Z","steps":["trace[302351669] 'process raft request'  (duration: 243.833114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:32.832967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.479862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T07:00:32.833001Z","caller":"traceutil/trace.go:172","msg":"trace[1089606970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"220.525584ms","start":"2025-10-02T07:00:32.612469Z","end":"2025-10-02T07:00:32.832995Z","steps":["trace[1089606970] 'agreement among raft nodes before linearized reading'  (duration: 220.422716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.990824Z","caller":"traceutil/trace.go:172","msg":"trace[1822440841] linearizableReadLoop","detail":"{readStateIndex:1259; appliedIndex:1259; }","duration":"216.288139ms","start":"2025-10-02T07:00:39.774473Z","end":"2025-10-02T07:00:39.990762Z","steps":["trace[1822440841] 'read index received'  (duration: 216.279919ms)","trace[1822440841] 'applied index is now lower than readState.Index'  (duration: 6.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:00:39.991358Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.077704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:00:39.991456Z","caller":"traceutil/trace.go:172","msg":"trace[1082597067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1206; }","duration":"217.190679ms","start":"2025-10-02T07:00:39.774258Z","end":"2025-10-02T07:00:39.991449Z","steps":["trace[1082597067] 'agreement among raft nodes before linearized reading'  (duration: 216.738402ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.992313Z","caller":"traceutil/trace.go:172","msg":"trace[515400758] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"337.963385ms","start":"2025-10-02T07:00:39.654341Z","end":"2025-10-02T07:00:39.992305Z","steps":["trace[515400758] 'process raft request'  (duration: 337.312964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:39.992477Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.654280Z","time spent":"338.099015ms","remote":"127.0.0.1:56776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1205 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-02T07:00:39.994757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.655974Z","time spent":"338.780211ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-10-02T07:02:18.249354Z","caller":"traceutil/trace.go:172","msg":"trace[1937839981] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"110.209012ms","start":"2025-10-02T07:02:18.139042Z","end":"2025-10-02T07:02:18.249251Z","steps":["trace[1937839981] 'process raft request'  (duration: 107.760601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:02:25.358154Z","caller":"traceutil/trace.go:172","msg":"trace[1514029901] linearizableReadLoop","detail":"{readStateIndex:1683; appliedIndex:1683; }","duration":"269.707219ms","start":"2025-10-02T07:02:25.088427Z","end":"2025-10-02T07:02:25.358135Z","steps":["trace[1514029901] 'read index received'  (duration: 269.698824ms)","trace[1514029901] 'applied index is now lower than readState.Index'  (duration: 7.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:02:25.358835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.337456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.358908Z","caller":"traceutil/trace.go:172","msg":"trace[129833481] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"270.47424ms","start":"2025-10-02T07:02:25.088423Z","end":"2025-10-02T07:02:25.358898Z","steps":["trace[129833481] 'agreement among raft nodes before linearized reading'  (duration: 270.303097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.361904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.156634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.361957Z","caller":"traceutil/trace.go:172","msg":"trace[228810763] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1605; }","duration":"257.224721ms","start":"2025-10-02T07:02:25.104724Z","end":"2025-10-02T07:02:25.361949Z","steps":["trace[228810763] 'agreement among raft nodes before linearized reading'  (duration: 257.141662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.13527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363670Z","caller":"traceutil/trace.go:172","msg":"trace[2116337020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1606; }","duration":"129.197912ms","start":"2025-10-02T07:02:25.234464Z","end":"2025-10-02T07:02:25.363662Z","steps":["trace[2116337020] 'agreement among raft nodes before linearized reading'  (duration: 129.113844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.575698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363939Z","caller":"traceutil/trace.go:172","msg":"trace[2132272707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1606; }","duration":"192.616449ms","start":"2025-10-02T07:02:25.171317Z","end":"2025-10-02T07:02:25.363933Z","steps":["trace[2132272707] 'agreement among raft nodes before linearized reading'  (duration: 192.563634ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:07:46.437056Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1714}
	{"level":"info","ts":"2025-10-02T07:07:46.499568Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1714,"took":"60.828637ms","hash":1204393910,"current-db-size-bytes":5812224,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3612672,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-10-02T07:07:46.499630Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1204393910,"revision":1714,"compact-revision":-1}
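The etcd section above is dominated by "apply request took too long" warnings, where `took` exceeds the server's 100ms `expected-duration`. A small sketch for pulling those out of the JSON log lines; field names follow the output above, and the duration parsing handles only the `ms`/`s` suffixes seen in this log:

```python
import json

# Scan etcd JSON log lines (like those above) and collect the
# "apply request took too long" warnings exceeding a threshold.
def slow_requests(lines, threshold_ms=100.0):
    slow = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        if entry.get("msg") != "apply request took too long":
            continue
        took = entry.get("took", "")
        # etcd prints Go durations such as "183.835357ms" or "1.2s"
        if took.endswith("ms"):
            ms = float(took[:-2])
        elif took.endswith("s"):
            ms = float(took[:-1]) * 1000.0
        else:
            continue
        if ms > threshold_ms:
            slow.append((entry.get("ts"), ms, entry.get("request", "")))
    return slow
```

Run over this section, every warn-level entry above would be flagged, with `took` values ranging from roughly 129ms to 270ms.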
	
	
	==> kernel <==
	 07:09:14 up 11 min,  0 users,  load average: 0.29, 0.71, 0.66
	Linux addons-535714 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68] <==
	E1002 07:07:54.831164       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:55.840848       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:56.848440       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:57.857584       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:58.865881       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:59.875606       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:00.882643       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:01.890990       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:02.898772       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:03.911372       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:04.925124       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:05.933262       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 07:08:43.532980       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:08:43.533954       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:08:43.637270       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:08:43.637380       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:08:43.658115       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:08:43.658609       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:08:43.689479       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:08:43.689591       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:08:43.708861       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:08:43.708891       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 07:08:44.690187       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 07:08:44.709321       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1002 07:08:44.752827       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20] <==
	E1002 07:08:47.539142       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:47.540222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:08:48.459251       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:48.460477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:08:48.514239       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:48.515518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:08:51.242033       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:51.243174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:08:52.646855       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:52.647864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:08:53.894308       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:53.896570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1002 07:08:54.261301       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^c8c4905e-9f5d-11f0-96f3-e64440f40013" nodeName="addons-535714" scheduledPods=["default/task-pv-pod"]
	E1002 07:08:54.834032       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I1002 07:08:55.059182       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 07:08:55.059242       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:08:55.086508       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 07:08:55.086547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 07:08:59.868494       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:08:59.869614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:09:01.361657       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:09:01.362759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:09:02.051750       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:09:02.052997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:09:09.834449       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b] <==
	I1002 06:57:56.940558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:57:57.042011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:57:57.042117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1002 06:57:57.042205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:57:57.167383       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 06:57:57.167427       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 06:57:57.167460       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:57:57.190949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:57:57.192886       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:57:57.192902       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:57:57.294325       1 config.go:200] "Starting service config controller"
	I1002 06:57:57.294358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:57:57.294429       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:57:57.294434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:57:57.294455       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:57:57.294459       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:57:57.438397       1 config.go:309] "Starting node config controller"
	I1002 06:57:57.441950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:57:57.479963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:57:57.494463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:57:57.494530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:57:57.494543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca] <==
	E1002 06:57:47.853654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:57:47.853709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:47.853767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:57:47.853824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:57:47.854040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:47.855481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:47.854491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.707149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:57:48.761606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:57:48.783806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:57:48.817274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:57:48.856898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:48.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:57:48.860214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:57:48.880906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.896863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:48.913429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:57:48.964287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:57:48.985241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:57:49.005874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:57:49.118344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:57:49.123456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:49.157781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:57:49.202768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 06:57:51.042340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.791388    1509 scope.go:117] "RemoveContainer" containerID="46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.791953    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1"} err="failed to get container status \"46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1\": rpc error: code = NotFound desc = could not find container \"46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1\": container with ID starting with 46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1 not found: ID does not exist"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.791987    1509 scope.go:117] "RemoveContainer" containerID="24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.907232    1509 scope.go:117] "RemoveContainer" containerID="24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: E1002 07:08:46.907880    1509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd\": container with ID starting with 24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd not found: ID does not exist" containerID="24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.908022    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd"} err="failed to get container status \"24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd\": rpc error: code = NotFound desc = could not find container \"24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd\": container with ID starting with 24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd not found: ID does not exist"
	Oct 02 07:08:46 addons-535714 kubelet[1509]: I1002 07:08:46.908046    1509 scope.go:117] "RemoveContainer" containerID="3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.025382    1509 scope.go:117] "RemoveContainer" containerID="3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: E1002 07:08:47.026279    1509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149\": container with ID starting with 3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149 not found: ID does not exist" containerID="3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.026324    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149"} err="failed to get container status \"3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149\": rpc error: code = NotFound desc = could not find container \"3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149\": container with ID starting with 3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149 not found: ID does not exist"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.178583    1509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a933762-fa4f-4072-8b4b-d8b6c46d4f7e" path="/var/lib/kubelet/pods/1a933762-fa4f-4072-8b4b-d8b6c46d4f7e/volumes"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.178916    1509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27de7994-2f0d-4f74-a4f7-7e22d4971553" path="/var/lib/kubelet/pods/27de7994-2f0d-4f74-a4f7-7e22d4971553/volumes"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.179283    1509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914e6ab5-a344-4664-a33a-b4909c1b7903" path="/var/lib/kubelet/pods/914e6ab5-a344-4664-a33a-b4909c1b7903/volumes"
	Oct 02 07:08:47 addons-535714 kubelet[1509]: I1002 07:08:47.179832    1509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f552d1e8-79a8-4bf6-be47-26aa19781b53" path="/var/lib/kubelet/pods/f552d1e8-79a8-4bf6-be47-26aa19781b53/volumes"
	Oct 02 07:08:50 addons-535714 kubelet[1509]: W1002 07:08:50.107977    1509 logging.go:55] [core] [Channel #65 SubChannel #66]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 02 07:08:50 addons-535714 kubelet[1509]: E1002 07:08:50.174937    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:08:51 addons-535714 kubelet[1509]: E1002 07:08:51.800948    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388931800607202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:51 addons-535714 kubelet[1509]: E1002 07:08:51.800971    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388931800607202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:54 addons-535714 kubelet[1509]: E1002 07:08:54.173865    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2f677461-445c-4e2a-aeaa-28f894f29b0b"
	Oct 02 07:08:56 addons-535714 kubelet[1509]: I1002 07:08:56.174154    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f7qcs" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:09:01 addons-535714 kubelet[1509]: E1002 07:09:01.803911    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388941803521720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:09:01 addons-535714 kubelet[1509]: E1002 07:09:01.803957    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388941803521720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:09:08 addons-535714 kubelet[1509]: E1002 07:09:08.173685    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2f677461-445c-4e2a-aeaa-28f894f29b0b"
	Oct 02 07:09:11 addons-535714 kubelet[1509]: E1002 07:09:11.807894    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388951806999691  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:09:11 addons-535714 kubelet[1509]: E1002 07:09:11.807922    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388951806999691  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	
	
	==> storage-provisioner [0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0] <==
	W1002 07:08:49.243847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:51.247308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:51.252521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:53.256668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:53.265154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:55.271306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:55.275997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:57.279336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:57.287342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:59.291657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:59.296976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:01.300672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:01.309304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:03.313311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:03.318546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:05.322835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:05.327207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:07.332234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:07.341469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:09.344570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:09.351631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:11.355455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:11.367417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:13.370774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:09:13.375498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-535714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n: exit status 1 (88.068437ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jxhkh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m3s                 default-scheduler  Successfully assigned default/nginx to addons-535714
	  Warning  Failed     6m18s (x2 over 7m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     93s (x4 over 7m)     kubelet            Error: ErrImagePull
	  Warning  Failed     93s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    25s (x9 over 6m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     25s (x9 over 6m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:02:40 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znf77 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-znf77:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m35s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-535714
	  Normal   Pulling    2m3s (x3 over 6m34s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     33s (x3 over 4m34s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     33s (x3 over 4m34s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x4 over 4m33s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     7s (x4 over 4m33s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48lf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g48lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jsw7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46z2n" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable ingress-dns --alsologtostderr -v=1: (1.206748639s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable ingress --alsologtostderr -v=1: (7.845332174s)
--- FAIL: TestAddons/parallel/Ingress (492.39s)

                                                
                                    
TestAddons/parallel/CSI (384.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1002 07:02:26.837879  566080 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 07:02:26.851196  566080 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 07:02:26.851233  566080 kapi.go:107] duration metric: took 13.370173ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 13.386062ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-535714 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-535714 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2f677461-445c-4e2a-aeaa-28f894f29b0b] Pending
helpers_test.go:352: "task-pv-pod" [2f677461-445c-4e2a-aeaa-28f894f29b0b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-02 07:08:40.475347344 +0000 UTC m=+698.643326132
addons_test.go:567: (dbg) Run:  kubectl --context addons-535714 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-535714 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-535714/192.168.39.164
Start Time:       Thu, 02 Oct 2025 07:02:40 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znf77 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-znf77:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-535714
  Warning  Failed     118s (x2 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     118s (x2 over 3m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    103s (x2 over 3m58s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     103s (x2 over 3m58s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    88s (x3 over 5m59s)   kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-535714 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-535714 logs task-pv-pod -n default: exit status 1 (73.973634ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-535714 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535714 -n addons-535714
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 logs -n 25: (1.453903128s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ -p binary-mirror-257523                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ addons  │ enable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-535714 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ip      │ addons-535714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │ 02 Oct 25 07:03 UTC │
	│ addons  │ addons-535714 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:03 UTC │ 02 Oct 25 07:03 UTC │
	│ addons  │ addons-535714 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:12.613104  566681 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:12.613401  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613412  566681 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:12.613416  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613691  566681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:12.614327  566681 out.go:368] Setting JSON to false
	I1002 06:57:12.615226  566681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49183,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:12.615318  566681 start.go:140] virtualization: kvm guest
	I1002 06:57:12.616912  566681 out.go:179] * [addons-535714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:12.618030  566681 notify.go:220] Checking for updates...
	I1002 06:57:12.618070  566681 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:57:12.619267  566681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:12.620404  566681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:12.621815  566681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:12.622922  566681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:57:12.623998  566681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:57:12.625286  566681 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:12.655279  566681 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 06:57:12.656497  566681 start.go:304] selected driver: kvm2
	I1002 06:57:12.656511  566681 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:12.656523  566681 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:57:12.657469  566681 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.657563  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.671466  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.671499  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.684735  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.684785  566681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:12.685037  566681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:57:12.685069  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:12.685110  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:12.685121  566681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:12.685226  566681 start.go:348] cluster config:
	{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:12.685336  566681 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.687549  566681 out.go:179] * Starting "addons-535714" primary control-plane node in "addons-535714" cluster
	I1002 06:57:12.688758  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:12.688809  566681 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:12.688824  566681 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:12.688927  566681 preload.go:233] Found /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:57:12.688941  566681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:57:12.689355  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:12.689385  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json: {Name:mkd226c1b0f282f7928061e8123511cda66ecb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:12.689560  566681 start.go:360] acquireMachinesLock for addons-535714: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 06:57:12.689631  566681 start.go:364] duration metric: took 53.377µs to acquireMachinesLock for "addons-535714"
	I1002 06:57:12.689654  566681 start.go:93] Provisioning new machine with config: &{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:12.689738  566681 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 06:57:12.691999  566681 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 06:57:12.692183  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:12.692244  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:12.705101  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I1002 06:57:12.705724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:12.706300  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:12.706320  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:12.706770  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:12.707010  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:12.707209  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:12.707401  566681 start.go:159] libmachine.API.Create for "addons-535714" (driver="kvm2")
	I1002 06:57:12.707450  566681 client.go:168] LocalClient.Create starting
	I1002 06:57:12.707494  566681 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem
	I1002 06:57:12.888250  566681 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem
	I1002 06:57:13.081005  566681 main.go:141] libmachine: Running pre-create checks...
	I1002 06:57:13.081030  566681 main.go:141] libmachine: (addons-535714) Calling .PreCreateCheck
	I1002 06:57:13.081598  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:13.082053  566681 main.go:141] libmachine: Creating machine...
	I1002 06:57:13.082069  566681 main.go:141] libmachine: (addons-535714) Calling .Create
	I1002 06:57:13.082276  566681 main.go:141] libmachine: (addons-535714) creating domain...
	I1002 06:57:13.082300  566681 main.go:141] libmachine: (addons-535714) creating network...
	I1002 06:57:13.083762  566681 main.go:141] libmachine: (addons-535714) DBG | found existing default network
	I1002 06:57:13.084004  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084021  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>default</name>
	I1002 06:57:13.084029  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 06:57:13.084036  566681 main.go:141] libmachine: (addons-535714) DBG |   <forward mode='nat'>
	I1002 06:57:13.084041  566681 main.go:141] libmachine: (addons-535714) DBG |     <nat>
	I1002 06:57:13.084047  566681 main.go:141] libmachine: (addons-535714) DBG |       <port start='1024' end='65535'/>
	I1002 06:57:13.084051  566681 main.go:141] libmachine: (addons-535714) DBG |     </nat>
	I1002 06:57:13.084055  566681 main.go:141] libmachine: (addons-535714) DBG |   </forward>
	I1002 06:57:13.084061  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 06:57:13.084068  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 06:57:13.084084  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 06:57:13.084098  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084111  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 06:57:13.084123  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084131  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084152  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084191  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084749  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.084601  566709 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1002 06:57:13.084771  566681 main.go:141] libmachine: (addons-535714) DBG | defining private network:
	I1002 06:57:13.084780  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084785  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084801  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.084820  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.084831  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.084840  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084851  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.084861  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084868  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084878  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084888  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.090767  566681 main.go:141] libmachine: (addons-535714) DBG | creating private network mk-addons-535714 192.168.39.0/24...
	I1002 06:57:13.158975  566681 main.go:141] libmachine: (addons-535714) DBG | private network mk-addons-535714 192.168.39.0/24 created
	I1002 06:57:13.159275  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.159307  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.159316  566681 main.go:141] libmachine: (addons-535714) setting up store path in /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.159335  566681 main.go:141] libmachine: (addons-535714) building disk image from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:13.159343  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>30f68bcb-0ec3-45ac-9012-251c5feb215b</uuid>
	I1002 06:57:13.159350  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 06:57:13.159356  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:03:a3:ce'/>
	I1002 06:57:13.159360  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.159383  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.159402  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.159413  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.159428  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.159461  566681 main.go:141] libmachine: (addons-535714) Downloading /home/jenkins/minikube-integration/21643-562157/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 06:57:13.159477  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.159489  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.159500  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.159522  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.159293  566709 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.427161  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.426986  566709 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa...
	I1002 06:57:13.691596  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691434  566709 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk...
	I1002 06:57:13.691620  566681 main.go:141] libmachine: (addons-535714) DBG | Writing magic tar header
	I1002 06:57:13.691651  566681 main.go:141] libmachine: (addons-535714) DBG | Writing SSH key tar header
	I1002 06:57:13.691660  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691559  566709 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.691671  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714
	I1002 06:57:13.691678  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines
	I1002 06:57:13.691687  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.691694  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157
	I1002 06:57:13.691702  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 06:57:13.691710  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins
	I1002 06:57:13.691724  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 (perms=drwx------)
	I1002 06:57:13.691738  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines (perms=drwxr-xr-x)
	I1002 06:57:13.691747  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home
	I1002 06:57:13.691758  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube (perms=drwxr-xr-x)
	I1002 06:57:13.691769  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157 (perms=drwxrwxr-x)
	I1002 06:57:13.691781  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 06:57:13.691789  566681 main.go:141] libmachine: (addons-535714) DBG | skipping /home - not owner
	I1002 06:57:13.691803  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 06:57:13.691811  566681 main.go:141] libmachine: (addons-535714) defining domain...
	I1002 06:57:13.693046  566681 main.go:141] libmachine: (addons-535714) defining domain using XML: 
	I1002 06:57:13.693074  566681 main.go:141] libmachine: (addons-535714) <domain type='kvm'>
	I1002 06:57:13.693080  566681 main.go:141] libmachine: (addons-535714)   <name>addons-535714</name>
	I1002 06:57:13.693085  566681 main.go:141] libmachine: (addons-535714)   <memory unit='MiB'>4096</memory>
	I1002 06:57:13.693090  566681 main.go:141] libmachine: (addons-535714)   <vcpu>2</vcpu>
	I1002 06:57:13.693093  566681 main.go:141] libmachine: (addons-535714)   <features>
	I1002 06:57:13.693098  566681 main.go:141] libmachine: (addons-535714)     <acpi/>
	I1002 06:57:13.693102  566681 main.go:141] libmachine: (addons-535714)     <apic/>
	I1002 06:57:13.693109  566681 main.go:141] libmachine: (addons-535714)     <pae/>
	I1002 06:57:13.693115  566681 main.go:141] libmachine: (addons-535714)   </features>
	I1002 06:57:13.693124  566681 main.go:141] libmachine: (addons-535714)   <cpu mode='host-passthrough'>
	I1002 06:57:13.693132  566681 main.go:141] libmachine: (addons-535714)   </cpu>
	I1002 06:57:13.693155  566681 main.go:141] libmachine: (addons-535714)   <os>
	I1002 06:57:13.693163  566681 main.go:141] libmachine: (addons-535714)     <type>hvm</type>
	I1002 06:57:13.693172  566681 main.go:141] libmachine: (addons-535714)     <boot dev='cdrom'/>
	I1002 06:57:13.693186  566681 main.go:141] libmachine: (addons-535714)     <boot dev='hd'/>
	I1002 06:57:13.693192  566681 main.go:141] libmachine: (addons-535714)     <bootmenu enable='no'/>
	I1002 06:57:13.693197  566681 main.go:141] libmachine: (addons-535714)   </os>
	I1002 06:57:13.693202  566681 main.go:141] libmachine: (addons-535714)   <devices>
	I1002 06:57:13.693207  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='cdrom'>
	I1002 06:57:13.693215  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.693220  566681 main.go:141] libmachine: (addons-535714)       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.693225  566681 main.go:141] libmachine: (addons-535714)       <readonly/>
	I1002 06:57:13.693231  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693240  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='disk'>
	I1002 06:57:13.693255  566681 main.go:141] libmachine: (addons-535714)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 06:57:13.693309  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.693334  566681 main.go:141] libmachine: (addons-535714)       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.693341  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693357  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693371  566681 main.go:141] libmachine: (addons-535714)       <source network='mk-addons-535714'/>
	I1002 06:57:13.693378  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693391  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693399  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693411  566681 main.go:141] libmachine: (addons-535714)       <source network='default'/>
	I1002 06:57:13.693416  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693435  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693445  566681 main.go:141] libmachine: (addons-535714)     <serial type='pty'>
	I1002 06:57:13.693480  566681 main.go:141] libmachine: (addons-535714)       <target port='0'/>
	I1002 06:57:13.693520  566681 main.go:141] libmachine: (addons-535714)     </serial>
	I1002 06:57:13.693540  566681 main.go:141] libmachine: (addons-535714)     <console type='pty'>
	I1002 06:57:13.693552  566681 main.go:141] libmachine: (addons-535714)       <target type='serial' port='0'/>
	I1002 06:57:13.693564  566681 main.go:141] libmachine: (addons-535714)     </console>
	I1002 06:57:13.693575  566681 main.go:141] libmachine: (addons-535714)     <rng model='virtio'>
	I1002 06:57:13.693588  566681 main.go:141] libmachine: (addons-535714)       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.693598  566681 main.go:141] libmachine: (addons-535714)     </rng>
	I1002 06:57:13.693609  566681 main.go:141] libmachine: (addons-535714)   </devices>
	I1002 06:57:13.693618  566681 main.go:141] libmachine: (addons-535714) </domain>
	I1002 06:57:13.693631  566681 main.go:141] libmachine: (addons-535714) 
	I1002 06:57:13.698471  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:ff:9b:2c in network default
	I1002 06:57:13.699181  566681 main.go:141] libmachine: (addons-535714) starting domain...
	I1002 06:57:13.699210  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:13.699219  566681 main.go:141] libmachine: (addons-535714) ensuring networks are active...
	I1002 06:57:13.699886  566681 main.go:141] libmachine: (addons-535714) Ensuring network default is active
	I1002 06:57:13.700240  566681 main.go:141] libmachine: (addons-535714) Ensuring network mk-addons-535714 is active
	I1002 06:57:13.700911  566681 main.go:141] libmachine: (addons-535714) getting domain XML...
	I1002 06:57:13.701998  566681 main.go:141] libmachine: (addons-535714) DBG | starting domain XML:
	I1002 06:57:13.702019  566681 main.go:141] libmachine: (addons-535714) DBG | <domain type='kvm'>
	I1002 06:57:13.702029  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>addons-535714</name>
	I1002 06:57:13.702036  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>26ed18e3-cae3-43e2-ba2a-85be4a0a7371</uuid>
	I1002 06:57:13.702049  566681 main.go:141] libmachine: (addons-535714) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 06:57:13.702060  566681 main.go:141] libmachine: (addons-535714) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 06:57:13.702069  566681 main.go:141] libmachine: (addons-535714) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 06:57:13.702075  566681 main.go:141] libmachine: (addons-535714) DBG |   <os>
	I1002 06:57:13.702085  566681 main.go:141] libmachine: (addons-535714) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 06:57:13.702093  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='cdrom'/>
	I1002 06:57:13.702101  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='hd'/>
	I1002 06:57:13.702116  566681 main.go:141] libmachine: (addons-535714) DBG |     <bootmenu enable='no'/>
	I1002 06:57:13.702127  566681 main.go:141] libmachine: (addons-535714) DBG |   </os>
	I1002 06:57:13.702134  566681 main.go:141] libmachine: (addons-535714) DBG |   <features>
	I1002 06:57:13.702180  566681 main.go:141] libmachine: (addons-535714) DBG |     <acpi/>
	I1002 06:57:13.702204  566681 main.go:141] libmachine: (addons-535714) DBG |     <apic/>
	I1002 06:57:13.702215  566681 main.go:141] libmachine: (addons-535714) DBG |     <pae/>
	I1002 06:57:13.702220  566681 main.go:141] libmachine: (addons-535714) DBG |   </features>
	I1002 06:57:13.702241  566681 main.go:141] libmachine: (addons-535714) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 06:57:13.702256  566681 main.go:141] libmachine: (addons-535714) DBG |   <clock offset='utc'/>
	I1002 06:57:13.702265  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 06:57:13.702283  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_reboot>restart</on_reboot>
	I1002 06:57:13.702295  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_crash>destroy</on_crash>
	I1002 06:57:13.702305  566681 main.go:141] libmachine: (addons-535714) DBG |   <devices>
	I1002 06:57:13.702317  566681 main.go:141] libmachine: (addons-535714) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 06:57:13.702328  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='cdrom'>
	I1002 06:57:13.702340  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw'/>
	I1002 06:57:13.702352  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.702364  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.702375  566681 main.go:141] libmachine: (addons-535714) DBG |       <readonly/>
	I1002 06:57:13.702387  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 06:57:13.702398  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702419  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='disk'>
	I1002 06:57:13.702432  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 06:57:13.702451  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.702462  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.702472  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 06:57:13.702482  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702490  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 06:57:13.702503  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 06:57:13.702512  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702521  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 06:57:13.702535  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 06:57:13.702589  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 06:57:13.702612  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702624  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702630  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:00:74:bc'/>
	I1002 06:57:13.702639  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='mk-addons-535714'/>
	I1002 06:57:13.702646  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702658  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 06:57:13.702665  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702675  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702687  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:ff:9b:2c'/>
	I1002 06:57:13.702697  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='default'/>
	I1002 06:57:13.702707  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702719  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 06:57:13.702730  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702740  566681 main.go:141] libmachine: (addons-535714) DBG |     <serial type='pty'>
	I1002 06:57:13.702751  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='isa-serial' port='0'>
	I1002 06:57:13.702765  566681 main.go:141] libmachine: (addons-535714) DBG |         <model name='isa-serial'/>
	I1002 06:57:13.702775  566681 main.go:141] libmachine: (addons-535714) DBG |       </target>
	I1002 06:57:13.702784  566681 main.go:141] libmachine: (addons-535714) DBG |     </serial>
	I1002 06:57:13.702806  566681 main.go:141] libmachine: (addons-535714) DBG |     <console type='pty'>
	I1002 06:57:13.702820  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='serial' port='0'/>
	I1002 06:57:13.702827  566681 main.go:141] libmachine: (addons-535714) DBG |     </console>
	I1002 06:57:13.702839  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='mouse' bus='ps2'/>
	I1002 06:57:13.702850  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 06:57:13.702861  566681 main.go:141] libmachine: (addons-535714) DBG |     <audio id='1' type='none'/>
	I1002 06:57:13.702881  566681 main.go:141] libmachine: (addons-535714) DBG |     <memballoon model='virtio'>
	I1002 06:57:13.702895  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 06:57:13.702901  566681 main.go:141] libmachine: (addons-535714) DBG |     </memballoon>
	I1002 06:57:13.702910  566681 main.go:141] libmachine: (addons-535714) DBG |     <rng model='virtio'>
	I1002 06:57:13.702918  566681 main.go:141] libmachine: (addons-535714) DBG |       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.702929  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 06:57:13.702944  566681 main.go:141] libmachine: (addons-535714) DBG |     </rng>
	I1002 06:57:13.702957  566681 main.go:141] libmachine: (addons-535714) DBG |   </devices>
	I1002 06:57:13.702972  566681 main.go:141] libmachine: (addons-535714) DBG | </domain>
	I1002 06:57:13.702987  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:14.963247  566681 main.go:141] libmachine: (addons-535714) waiting for domain to start...
	I1002 06:57:14.964664  566681 main.go:141] libmachine: (addons-535714) domain is now running
	I1002 06:57:14.964695  566681 main.go:141] libmachine: (addons-535714) waiting for IP...
	I1002 06:57:14.965420  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:14.966032  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:14.966060  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:14.966362  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:14.966431  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:14.966367  566709 retry.go:31] will retry after 210.201926ms: waiting for domain to come up
	I1002 06:57:15.178058  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.178797  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.178832  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.179051  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.179089  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.179030  566709 retry.go:31] will retry after 312.318729ms: waiting for domain to come up
	I1002 06:57:15.493036  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.493844  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.493865  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.494158  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.494260  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.494172  566709 retry.go:31] will retry after 379.144998ms: waiting for domain to come up
	I1002 06:57:15.874866  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.875597  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.875618  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.875940  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.875972  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.875891  566709 retry.go:31] will retry after 392.719807ms: waiting for domain to come up
	I1002 06:57:16.270678  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.271369  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.271417  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.271795  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.271822  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.271752  566709 retry.go:31] will retry after 502.852746ms: waiting for domain to come up
	I1002 06:57:16.776382  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.777033  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.777083  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.777418  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.777452  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.777390  566709 retry.go:31] will retry after 817.041708ms: waiting for domain to come up
	I1002 06:57:17.596403  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:17.597002  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:17.597037  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:17.597304  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:17.597337  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:17.597286  566709 retry.go:31] will retry after 1.129250566s: waiting for domain to come up
	I1002 06:57:18.728727  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:18.729410  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:18.729438  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:18.729739  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:18.729770  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:18.729716  566709 retry.go:31] will retry after 1.486801145s: waiting for domain to come up
	I1002 06:57:20.218801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:20.219514  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:20.219546  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:20.219811  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:20.219864  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:20.219802  566709 retry.go:31] will retry after 1.676409542s: waiting for domain to come up
	I1002 06:57:21.898812  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:21.899513  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:21.899536  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:21.899819  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:21.899877  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:21.899808  566709 retry.go:31] will retry after 1.43578276s: waiting for domain to come up
	I1002 06:57:23.337598  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:23.338214  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:23.338235  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:23.338569  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:23.338642  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:23.338553  566709 retry.go:31] will retry after 2.182622976s: waiting for domain to come up
	I1002 06:57:25.524305  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:25.524996  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:25.525030  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:25.525352  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:25.525383  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:25.525329  566709 retry.go:31] will retry after 2.567637867s: waiting for domain to come up
	I1002 06:57:28.094839  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:28.095351  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:28.095371  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:28.095666  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:28.095696  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:28.095635  566709 retry.go:31] will retry after 3.838879921s: waiting for domain to come up
	I1002 06:57:31.938799  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939560  566681 main.go:141] libmachine: (addons-535714) found domain IP: 192.168.39.164
	I1002 06:57:31.939593  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has current primary IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939601  566681 main.go:141] libmachine: (addons-535714) reserving static IP address...
	I1002 06:57:31.940101  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find host DHCP lease matching {name: "addons-535714", mac: "52:54:00:00:74:bc", ip: "192.168.39.164"} in network mk-addons-535714
	I1002 06:57:32.153010  566681 main.go:141] libmachine: (addons-535714) DBG | Getting to WaitForSSH function...
	I1002 06:57:32.153043  566681 main.go:141] libmachine: (addons-535714) reserved static IP address 192.168.39.164 for domain addons-535714
	I1002 06:57:32.153056  566681 main.go:141] libmachine: (addons-535714) waiting for SSH...
	I1002 06:57:32.156675  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157263  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.157288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157522  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH client type: external
	I1002 06:57:32.157548  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa (-rw-------)
	I1002 06:57:32.157582  566681 main.go:141] libmachine: (addons-535714) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 06:57:32.157609  566681 main.go:141] libmachine: (addons-535714) DBG | About to run SSH command:
	I1002 06:57:32.157620  566681 main.go:141] libmachine: (addons-535714) DBG | exit 0
	I1002 06:57:32.286418  566681 main.go:141] libmachine: (addons-535714) DBG | SSH cmd err, output: <nil>: 
	I1002 06:57:32.286733  566681 main.go:141] libmachine: (addons-535714) domain creation complete
	I1002 06:57:32.287044  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:32.287640  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288020  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288207  566681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 06:57:32.288223  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:32.289782  566681 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 06:57:32.289795  566681 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 06:57:32.289800  566681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 06:57:32.289805  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.292433  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.292851  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.292897  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.293050  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.293317  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293481  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293658  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.293813  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.294063  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.294076  566681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 06:57:32.392654  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.392681  566681 main.go:141] libmachine: Detecting the provisioner...
	I1002 06:57:32.392690  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.396029  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396454  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.396486  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396681  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.396903  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397079  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397260  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.397412  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.397680  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.397696  566681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 06:57:32.501992  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 06:57:32.502093  566681 main.go:141] libmachine: found compatible host: buildroot
	I1002 06:57:32.502117  566681 main.go:141] libmachine: Provisioning with buildroot...
	I1002 06:57:32.502131  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502439  566681 buildroot.go:166] provisioning hostname "addons-535714"
	I1002 06:57:32.502476  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502701  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.506170  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506653  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.506716  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506786  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.507040  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507252  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507426  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.507729  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.507997  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.508013  566681 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535714 && echo "addons-535714" | sudo tee /etc/hostname
	I1002 06:57:32.632360  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535714
	
	I1002 06:57:32.632404  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.635804  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636293  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.636319  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636574  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.636804  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.636969  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.637110  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.637297  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.637584  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.637613  566681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:57:32.752063  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.752119  566681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 06:57:32.752193  566681 buildroot.go:174] setting up certificates
	I1002 06:57:32.752210  566681 provision.go:84] configureAuth start
	I1002 06:57:32.752256  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.752721  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:32.756026  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756514  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.756545  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756704  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.759506  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.759945  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.759972  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.760113  566681 provision.go:143] copyHostCerts
	I1002 06:57:32.760210  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 06:57:32.760331  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 06:57:32.760392  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 06:57:32.760440  566681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.addons-535714 san=[127.0.0.1 192.168.39.164 addons-535714 localhost minikube]
	I1002 06:57:32.997259  566681 provision.go:177] copyRemoteCerts
	I1002 06:57:32.997339  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:57:32.997365  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.001746  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002246  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.002275  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002606  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.002841  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.003067  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.003261  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.087811  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:57:33.120074  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:57:33.152344  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:57:33.183560  566681 provision.go:87] duration metric: took 431.305231ms to configureAuth
	I1002 06:57:33.183592  566681 buildroot.go:189] setting minikube options for container-runtime
	I1002 06:57:33.183785  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:33.183901  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.187438  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.187801  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.187825  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.188034  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.188285  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188508  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188682  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.188927  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.189221  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.189246  566681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:57:33.455871  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:57:33.455896  566681 main.go:141] libmachine: Checking connection to Docker...
	I1002 06:57:33.455904  566681 main.go:141] libmachine: (addons-535714) Calling .GetURL
	I1002 06:57:33.457296  566681 main.go:141] libmachine: (addons-535714) DBG | using libvirt version 8000000
	I1002 06:57:33.460125  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460550  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.460582  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460738  566681 main.go:141] libmachine: Docker is up and running!
	I1002 06:57:33.460770  566681 main.go:141] libmachine: Reticulating splines...
	I1002 06:57:33.460780  566681 client.go:171] duration metric: took 20.753318284s to LocalClient.Create
	I1002 06:57:33.460805  566681 start.go:167] duration metric: took 20.753406484s to libmachine.API.Create "addons-535714"
	I1002 06:57:33.460815  566681 start.go:293] postStartSetup for "addons-535714" (driver="kvm2")
	I1002 06:57:33.460824  566681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:57:33.460841  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.461104  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:57:33.461149  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.463666  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464001  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.464024  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.464486  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.464662  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.464805  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.547032  566681 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:57:33.552379  566681 info.go:137] Remote host: Buildroot 2025.02
	I1002 06:57:33.552408  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 06:57:33.552489  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 06:57:33.552524  566681 start.go:296] duration metric: took 91.702797ms for postStartSetup
	I1002 06:57:33.552573  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:33.553229  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.556294  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556659  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.556691  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556979  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:33.557200  566681 start.go:128] duration metric: took 20.867433906s to createHost
	I1002 06:57:33.557235  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.559569  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.559976  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.560033  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.560209  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.560387  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560524  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560647  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.560782  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.561006  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.561024  566681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 06:57:33.663941  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759388253.625480282
	
	I1002 06:57:33.663966  566681 fix.go:216] guest clock: 1759388253.625480282
	I1002 06:57:33.663974  566681 fix.go:229] Guest: 2025-10-02 06:57:33.625480282 +0000 UTC Remote: 2025-10-02 06:57:33.557215192 +0000 UTC m=+20.980868887 (delta=68.26509ms)
	I1002 06:57:33.664010  566681 fix.go:200] guest clock delta is within tolerance: 68.26509ms
	I1002 06:57:33.664022  566681 start.go:83] releasing machines lock for "addons-535714", held for 20.974372731s
	I1002 06:57:33.664050  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.664374  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.667827  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668310  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.668344  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668518  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669079  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669275  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669418  566681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:57:33.669466  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.669473  566681 ssh_runner.go:195] Run: cat /version.json
	I1002 06:57:33.669492  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.672964  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673168  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673457  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673495  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673642  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673878  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674001  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.674093  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674177  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674268  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.674352  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674502  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.752747  566681 ssh_runner.go:195] Run: systemctl --version
	I1002 06:57:33.777712  566681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:57:33.941402  566681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:57:33.949414  566681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:57:33.949490  566681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:57:33.971089  566681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:57:33.971121  566681 start.go:495] detecting cgroup driver to use...
	I1002 06:57:33.971215  566681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:57:33.990997  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:57:34.009642  566681 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:57:34.009719  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:57:34.028675  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:57:34.045011  566681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:57:34.191090  566681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:57:34.404836  566681 docker.go:234] disabling docker service ...
	I1002 06:57:34.404915  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:57:34.421846  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:57:34.437815  566681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:57:34.593256  566681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:57:34.739807  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:57:34.755656  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:57:34.780318  566681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:57:34.780381  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.794344  566681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:57:34.794437  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.807921  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.821174  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.834265  566681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:57:34.848039  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.861013  566681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.882928  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.895874  566681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:57:34.906834  566681 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:57:34.906902  566681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:57:34.930283  566681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:57:34.944196  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:35.086744  566681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:57:35.203118  566681 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:57:35.203247  566681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:57:35.208872  566681 start.go:563] Will wait 60s for crictl version
	I1002 06:57:35.208951  566681 ssh_runner.go:195] Run: which crictl
	I1002 06:57:35.213165  566681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 06:57:35.254690  566681 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 06:57:35.254809  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.285339  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.318360  566681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 06:57:35.319680  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:35.322840  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323187  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:35.323215  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323541  566681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 06:57:35.328294  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:35.344278  566681 kubeadm.go:883] updating cluster {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:57:35.344381  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:35.344426  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:35.382419  566681 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 06:57:35.382487  566681 ssh_runner.go:195] Run: which lz4
	I1002 06:57:35.386980  566681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 06:57:35.392427  566681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 06:57:35.392457  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 06:57:36.901929  566681 crio.go:462] duration metric: took 1.514994717s to copy over tarball
	I1002 06:57:36.902020  566681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 06:57:38.487982  566681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.585912508s)
	I1002 06:57:38.488018  566681 crio.go:469] duration metric: took 1.586055344s to extract the tarball
	I1002 06:57:38.488028  566681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 06:57:38.530041  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:38.574743  566681 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:57:38.574771  566681 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:57:38.574780  566681 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.34.1 crio true true} ...
	I1002 06:57:38.574907  566681 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:57:38.574982  566681 ssh_runner.go:195] Run: crio config
	I1002 06:57:38.626077  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:38.626100  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:38.626114  566681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:57:38.626157  566681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535714 NodeName:addons-535714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:57:38.626290  566681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-535714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:57:38.626379  566681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:57:38.638875  566681 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:57:38.638942  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:57:38.650923  566681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 06:57:38.672765  566681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:57:38.695198  566681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 06:57:38.716738  566681 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1002 06:57:38.721153  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:38.736469  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:38.882003  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:38.903662  566681 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714 for IP: 192.168.39.164
	I1002 06:57:38.903695  566681 certs.go:195] generating shared ca certs ...
	I1002 06:57:38.903722  566681 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.903919  566681 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 06:57:38.961629  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt ...
	I1002 06:57:38.961659  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt: {Name:mkce3dd067e2e7843e2a288d28dbaf57f057aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961829  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key ...
	I1002 06:57:38.961841  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key: {Name:mka327360c05168b3164194068242bb15d511ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961939  566681 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 06:57:39.050167  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt ...
	I1002 06:57:39.050199  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt: {Name:mkf18fa19ddf5ebcd4669a9a2e369e414c03725b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050375  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key ...
	I1002 06:57:39.050388  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key: {Name:mk774f61354e64c5344d2d0d059164fff9076c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050460  566681 certs.go:257] generating profile certs ...
	I1002 06:57:39.050516  566681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key
	I1002 06:57:39.050537  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt with IP's: []
	I1002 06:57:39.147298  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt ...
	I1002 06:57:39.147330  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: {Name:mk17b498d515b2f43666faa03b17d7223c9a8157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147495  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key ...
	I1002 06:57:39.147505  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key: {Name:mke1e8140b8916f87dd85d98abe8a51503f6e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147578  566681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed
	I1002 06:57:39.147597  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1002 06:57:39.310236  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed ...
	I1002 06:57:39.310266  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed: {Name:mk247c08955d8ed7427926c7244db21ffe837768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310428  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed ...
	I1002 06:57:39.310441  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed: {Name:mkc3fa16c2fd82a07eac700fa655e28a42c60f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310525  566681 certs.go:382] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt
	I1002 06:57:39.310624  566681 certs.go:386] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key
	I1002 06:57:39.310682  566681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key
	I1002 06:57:39.310701  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt with IP's: []
	I1002 06:57:39.497350  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt ...
	I1002 06:57:39.497386  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt: {Name:mk4f28529f4cee1ff8311028b7bb7fc35a77bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497555  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key ...
	I1002 06:57:39.497569  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key: {Name:mkfac0b0a329edb8634114371202cb4ba011c129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497750  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:57:39.497784  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:57:39.497808  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:57:39.497835  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 06:57:39.498475  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:57:39.530649  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:57:39.561340  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:57:39.593844  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:57:39.629628  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:57:39.668367  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:57:39.699924  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:57:39.730177  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:57:39.761107  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:57:39.791592  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:57:39.813294  566681 ssh_runner.go:195] Run: openssl version
	I1002 06:57:39.820587  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:57:39.834664  566681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840283  566681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840348  566681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.848412  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:57:39.863027  566681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:57:39.868269  566681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:57:39.868325  566681 kubeadm.go:400] StartCluster: {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:39.868408  566681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:57:39.868500  566681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:57:39.910571  566681 cri.go:89] found id: ""
	I1002 06:57:39.910645  566681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:57:39.923825  566681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:57:39.936522  566681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:57:39.949191  566681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:57:39.949214  566681 kubeadm.go:157] found existing configuration files:
	
	I1002 06:57:39.949292  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:57:39.961561  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:57:39.961637  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:57:39.974337  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:57:39.986029  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:57:39.986104  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:57:39.997992  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.008894  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:57:40.008966  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.021235  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:57:40.032694  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:57:40.032754  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:57:40.045554  566681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 06:57:40.211362  566681 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:57:51.799597  566681 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:57:51.799689  566681 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:57:51.799798  566681 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:57:51.799950  566681 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:57:51.800082  566681 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:57:51.800206  566681 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:57:51.802349  566681 out.go:252]   - Generating certificates and keys ...
	I1002 06:57:51.802439  566681 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:57:51.802492  566681 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:57:51.802586  566681 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:57:51.802729  566681 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:57:51.802823  566681 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:57:51.802894  566681 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:57:51.802944  566681 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:57:51.803058  566681 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803125  566681 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:57:51.803276  566681 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803350  566681 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:57:51.803420  566681 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:57:51.803491  566681 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:57:51.803557  566681 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:57:51.803634  566681 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:57:51.803717  566681 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:57:51.803807  566681 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:57:51.803899  566681 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:57:51.803950  566681 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:57:51.804029  566681 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:57:51.804088  566681 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:57:51.805702  566681 out.go:252]   - Booting up control plane ...
	I1002 06:57:51.805781  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:57:51.805846  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:57:51.805929  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:57:51.806028  566681 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:57:51.806148  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:57:51.806260  566681 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:57:51.806361  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:57:51.806420  566681 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:57:51.806575  566681 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:57:51.806669  566681 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:57:51.806717  566681 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.672587ms
	I1002 06:57:51.806806  566681 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:57:51.806892  566681 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1002 06:57:51.806963  566681 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:57:51.807067  566681 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:57:51.807185  566681 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.362189492s
	I1002 06:57:51.807284  566681 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.802664802s
	I1002 06:57:51.807338  566681 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003805488s
	I1002 06:57:51.807453  566681 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:57:51.807587  566681 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:57:51.807642  566681 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:57:51.807816  566681 kubeadm.go:318] [mark-control-plane] Marking the node addons-535714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:57:51.807890  566681 kubeadm.go:318] [bootstrap-token] Using token: 7tuk3k.1448ee54qv9op8vd
	I1002 06:57:51.810266  566681 out.go:252]   - Configuring RBAC rules ...
	I1002 06:57:51.810355  566681 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:57:51.810443  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:57:51.810582  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:57:51.810746  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:57:51.810922  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:57:51.811039  566681 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:57:51.811131  566681 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:57:51.811203  566681 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:57:51.811259  566681 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:57:51.811271  566681 kubeadm.go:318] 
	I1002 06:57:51.811321  566681 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:57:51.811327  566681 kubeadm.go:318] 
	I1002 06:57:51.811408  566681 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:57:51.811416  566681 kubeadm.go:318] 
	I1002 06:57:51.811438  566681 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:57:51.811524  566681 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:57:51.811568  566681 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:57:51.811574  566681 kubeadm.go:318] 
	I1002 06:57:51.811638  566681 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:57:51.811650  566681 kubeadm.go:318] 
	I1002 06:57:51.811704  566681 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:57:51.811711  566681 kubeadm.go:318] 
	I1002 06:57:51.811751  566681 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:57:51.811811  566681 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:57:51.811912  566681 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:57:51.811926  566681 kubeadm.go:318] 
	I1002 06:57:51.812042  566681 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:57:51.812153  566681 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:57:51.812165  566681 kubeadm.go:318] 
	I1002 06:57:51.812280  566681 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812417  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 \
	I1002 06:57:51.812453  566681 kubeadm.go:318] 	--control-plane 
	I1002 06:57:51.812464  566681 kubeadm.go:318] 
	I1002 06:57:51.812595  566681 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:57:51.812615  566681 kubeadm.go:318] 
	I1002 06:57:51.812711  566681 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812863  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 
	I1002 06:57:51.812931  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:51.812944  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:51.815686  566681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 06:57:51.817060  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 06:57:51.834402  566681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 06:57:51.858951  566681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:57:51.859117  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:51.859124  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535714 minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-535714 minikube.k8s.io/primary=true
	I1002 06:57:51.921378  566681 ops.go:34] apiserver oom_adj: -16
	I1002 06:57:52.030323  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:52.531214  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.031113  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.531050  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.030867  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.531128  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.030521  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.530702  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.030762  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.196068  566681 kubeadm.go:1113] duration metric: took 4.337043927s to wait for elevateKubeSystemPrivileges
	I1002 06:57:56.196100  566681 kubeadm.go:402] duration metric: took 16.3277794s to StartCluster
	I1002 06:57:56.196121  566681 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.196294  566681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:56.196768  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.197012  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:57:56.197039  566681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:56.197157  566681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:57:56.197305  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.197326  566681 addons.go:69] Setting ingress=true in profile "addons-535714"
	I1002 06:57:56.197323  566681 addons.go:69] Setting default-storageclass=true in profile "addons-535714"
	I1002 06:57:56.197353  566681 addons.go:238] Setting addon ingress=true in "addons-535714"
	I1002 06:57:56.197360  566681 addons.go:69] Setting registry=true in profile "addons-535714"
	I1002 06:57:56.197367  566681 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535714"
	I1002 06:57:56.197376  566681 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535714"
	I1002 06:57:56.197382  566681 addons.go:69] Setting volumesnapshots=true in profile "addons-535714"
	I1002 06:57:56.197391  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535714"
	I1002 06:57:56.197393  566681 addons.go:69] Setting ingress-dns=true in profile "addons-535714"
	I1002 06:57:56.197397  566681 addons.go:238] Setting addon volumesnapshots=true in "addons-535714"
	I1002 06:57:56.197403  566681 addons.go:238] Setting addon ingress-dns=true in "addons-535714"
	I1002 06:57:56.197413  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197417  566681 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:57:56.197432  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197438  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197454  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197317  566681 addons.go:69] Setting gcp-auth=true in profile "addons-535714"
	I1002 06:57:56.197804  566681 addons.go:69] Setting metrics-server=true in profile "addons-535714"
	I1002 06:57:56.197813  566681 mustload.go:65] Loading cluster: addons-535714
	I1002 06:57:56.197822  566681 addons.go:238] Setting addon metrics-server=true in "addons-535714"
	I1002 06:57:56.197849  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197953  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197985  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197348  566681 addons.go:69] Setting cloud-spanner=true in profile "addons-535714"
	I1002 06:57:56.197995  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198002  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.198025  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198027  566681 addons.go:69] Setting inspektor-gadget=true in profile "addons-535714"
	I1002 06:57:56.198034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198040  566681 addons.go:238] Setting addon inspektor-gadget=true in "addons-535714"
	I1002 06:57:56.198051  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198062  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198075  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198080  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198105  566681 addons.go:69] Setting volcano=true in profile "addons-535714"
	I1002 06:57:56.198115  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198118  566681 addons.go:238] Setting addon volcano=true in "addons-535714"
	I1002 06:57:56.198121  566681 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198148  566681 addons.go:69] Setting registry-creds=true in profile "addons-535714"
	I1002 06:57:56.198149  566681 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-535714"
	I1002 06:57:56.198007  566681 addons.go:238] Setting addon cloud-spanner=true in "addons-535714"
	I1002 06:57:56.197369  566681 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198159  566681 addons.go:238] Setting addon registry-creds=true in "addons-535714"
	I1002 06:57:56.197383  566681 addons.go:238] Setting addon registry=true in "addons-535714"
	I1002 06:57:56.198168  566681 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-535714"
	I1002 06:57:56.197305  566681 addons.go:69] Setting yakd=true in profile "addons-535714"
	I1002 06:57:56.198174  566681 addons.go:69] Setting storage-provisioner=true in profile "addons-535714"
	I1002 06:57:56.198182  566681 addons.go:238] Setting addon yakd=true in "addons-535714"
	I1002 06:57:56.198188  566681 addons.go:238] Setting addon storage-provisioner=true in "addons-535714"
	I1002 06:57:56.197356  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535714"
	I1002 06:57:56.197990  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198337  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198362  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198371  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198392  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198402  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198453  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198563  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198685  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198716  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198796  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198823  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198872  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198882  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198903  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.199225  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199278  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199496  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199602  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199605  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.199635  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.200717  566681 out.go:179] * Verifying Kubernetes components...
	I1002 06:57:56.203661  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:56.205590  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205627  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.205734  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205767  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.207434  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.207479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.210405  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.210443  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.213438  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.213479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.214017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.214056  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.232071  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1002 06:57:56.233110  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.234209  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.234234  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.234937  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.236013  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.236165  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.237450  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I1002 06:57:56.239323  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I1002 06:57:56.239414  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1002 06:57:56.240034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240196  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240748  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.240776  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.240868  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1002 06:57:56.240881  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.241379  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.241396  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.241535  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.242519  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.242540  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.242696  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.242735  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.242850  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.243325  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1002 06:57:56.243893  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.243945  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.244617  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.244654  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.245057  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.245890  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.245907  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.246010  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 06:57:56.246033  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1002 06:57:56.246568  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.247024  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.247099  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.247133  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.247421  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1002 06:57:56.247710  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.247729  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.248188  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.248445  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.249846  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.250467  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.251029  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251054  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.251579  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251601  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.252078  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.252654  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.252734  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.255593  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.255986  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.256022  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.257178  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.257900  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.257951  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.258275  566681 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-535714"
	I1002 06:57:56.259770  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.259874  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.260317  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.260360  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.260738  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.260770  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.261307  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.261989  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.262034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.263359  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I1002 06:57:56.263562  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I1002 06:57:56.264010  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.264539  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.264559  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.265015  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.265220  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.268199  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I1002 06:57:56.268835  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.269385  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.269407  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.269800  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.272103  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.272173  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.272820  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.274630  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1002 06:57:56.275810  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I1002 06:57:56.275999  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1002 06:57:56.276099  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I1002 06:57:56.276317  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1002 06:57:56.276957  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277804  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.277826  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.277935  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277992  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.279294  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.279318  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.279418  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279522  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1002 06:57:56.279526  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.280424  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280801  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280956  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:57:56.280961  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281067  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281080  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281248  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281259  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281396  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.280977  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281804  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.281870  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.282274  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282869  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.282901  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282927  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.282975  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.283442  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.284009  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.284202  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.284751  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:56.284768  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:57:56.284787  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.284857  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.284890  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.285017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.285054  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.288207  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.289274  566681 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:57:56.289290  566681 addons.go:238] Setting addon default-storageclass=true in "addons-535714"
	I1002 06:57:56.289364  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.289753  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.289797  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290042  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1002 06:57:56.290151  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.290556  566681 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:56.290578  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.290579  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:57:56.290609  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290771  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290990  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.291089  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1002 06:57:56.291362  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291376  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.291505  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291516  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.292055  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.293244  566681 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:57:56.294939  566681 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:56.294996  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:57:56.295277  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.296317  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.296363  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.296433  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I1002 06:57:56.297190  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.297368  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.300772  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.300866  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.300946  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.300966  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.300983  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301003  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301026  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.301076  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1002 06:57:56.301165  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.301203  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.301228  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1002 06:57:56.301400  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.301411  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.301454  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.301467  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.303443  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.303720  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.303466  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.303491  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.303762  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.303770  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.303776  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303526  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.303632  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.304435  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.304932  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.305291  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.305345  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.305464  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.305492  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1002 06:57:56.305495  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.305508  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.305577  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.305592  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	W1002 06:57:56.305630  566681 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:57:56.306621  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307189  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.307311  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.307383  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.307409  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307505  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.307540  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.307955  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.307981  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.308071  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.308163  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.308587  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.309033  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309057  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309132  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309293  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.309302  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309314  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309372  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.309533  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309698  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309703  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.309839  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310208  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310523  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.311044  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.311749  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.313557  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.316426  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1002 06:57:56.319293  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1002 06:57:56.319454  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.319564  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1002 06:57:56.319675  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1002 06:57:56.319683  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.319813  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.320386  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.320405  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.320695  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.320492  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.321204  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.321258  566681 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:57:56.321684  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.321443  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I1002 06:57:56.321593  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.321816  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.322144  566681 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:57:56.322156  566681 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:57:56.323037  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.323050  566681 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:56.323066  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:57:56.323087  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.323146  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.323323  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.323337  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.324564  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:56.324583  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:57:56.324603  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.324892  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325026  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.325041  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.325304  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1002 06:57:56.325602  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325730  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.325892  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326132  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326261  566681 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:57:56.327284  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.327472  566681 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:57:56.327597  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:57:56.327623  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.328569  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.328642  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.328661  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329119  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.329383  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.329634  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.329665  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329932  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.330003  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.331010  566681 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:57:56.331650  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332245  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:57:56.332277  566681 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:57:56.332261  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332297  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.332372  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.333369  566681 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:57:56.333621  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.333646  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.333810  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.334276  566681 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:57:56.334843  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.335194  566681 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:56.335210  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:57:56.335228  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.335446  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.335655  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I1002 06:57:56.335851  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.336132  566681 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:57:56.336170  566681 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:57:56.336280  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.336440  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 06:57:56.336618  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.337098  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.338250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.338315  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.338584  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.338676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.338709  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.338721  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.339313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.339382  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.339452  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.339507  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.340336  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.340677  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.340657  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341043  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.341796  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.341865  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342040  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342263  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342431  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342440  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342454  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342502  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342595  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342614  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342621  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342695  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.342072  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343379  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343750  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.343817  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.343832  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.344562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.344702  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344753  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.344946  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.345322  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.345404  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345404  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.345548  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.345606  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345806  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.346007  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.346320  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.346590  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.346862  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1002 06:57:56.347602  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.347914  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.348757  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.348800  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.349261  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.349633  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.349706  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.350337  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:57:56.351587  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:57:56.351643  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.351655  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:57:56.352903  566681 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:57:56.352987  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:57:56.353046  566681 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:57:56.353092  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.352987  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:57:56.353974  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1002 06:57:56.354300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1002 06:57:56.354530  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:57:56.354545  566681 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:57:56.354562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.354607  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355031  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355314  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355362  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355747  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.355869  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:57:56.355907  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355921  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355982  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.356446  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.356686  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.358485  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:57:56.359466  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.359801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360238  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360272  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.360643  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360654  566681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:56.360667  566681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:57:56.360676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360684  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.360847  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.360902  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360949  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.361063  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361261  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.361264  566681 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:57:56.361278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.361264  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:57:56.361448  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.361531  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361713  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.362047  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.363668  566681 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:57:56.363670  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:57:56.364768  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365172  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:56.365189  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:57:56.365208  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.365463  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.365492  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365867  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.366200  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.366332  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:57:56.366394  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.366567  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.367647  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:57:56.367669  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:57:56.367689  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.369424  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370073  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.370181  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370353  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.370354  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1002 06:57:56.370539  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.370710  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.370855  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.371120  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.371862  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.371993  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.372440  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.372590  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.372646  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.373687  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.373711  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.373884  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.374060  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.374270  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.374438  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.374887  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.376513  566681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:57:56.377878  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:56.377895  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:57:56.377926  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.381301  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.381862  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.381898  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.382058  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.382245  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.382379  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.382525  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	W1002 06:57:56.611250  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611293  566681 retry.go:31] will retry after 268.923212ms: ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	W1002 06:57:56.611372  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611378  566681 retry.go:31] will retry after 284.79555ms: ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:57.238066  566681 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:57:57.238093  566681 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:57:57.274258  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:57.291447  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:57.296644  566681 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:57.296665  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:57:57.317724  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:57.326760  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:57:57.326790  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:57:57.344388  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:57.359635  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:57:57.359666  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:57:57.391219  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:57.397913  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:57.466213  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:57.539770  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:57:57.539800  566681 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:57:57.565073  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:57:57.565109  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:57:57.626622  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.42956155s)
	I1002 06:57:57.626664  566681 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.422968545s)
	I1002 06:57:57.626751  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:57.626829  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:57:57.788309  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:57:57.788340  566681 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:57:57.863163  566681 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:57.863190  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:57:57.896903  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:57:57.896955  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:57:57.923302  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:58.011690  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:58.012981  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:58.110306  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:57:58.110346  566681 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:57:58.142428  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:57:58.142456  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:57:58.216082  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:57:58.216112  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:57:58.218768  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:58.222643  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.222669  566681 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:57:58.429860  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:57:58.429897  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:57:58.485954  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:57:58.485995  566681 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:57:58.501916  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:57:58.501955  566681 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:57:58.521314  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.818318  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:57:58.818357  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:57:58.833980  566681 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:58.834010  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:57:58.873392  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:58.873431  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:57:59.176797  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:59.186761  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:57:59.186798  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:57:59.305759  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:59.719259  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:57:59.719285  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:58:00.188246  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:58:00.188281  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:58:00.481133  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.20682266s)
	I1002 06:58:00.481238  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.481605  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.481667  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.481693  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.481705  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481717  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.482053  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.482070  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.482081  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.644178  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:58:00.644209  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:58:01.086809  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:58:01.086834  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:58:01.452986  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:01.453026  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:58:02.150700  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:02.601667  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.310178549s)
	I1002 06:58:02.601725  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.28395893s)
	I1002 06:58:02.601734  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601747  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601765  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601795  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601869  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.25743101s)
	I1002 06:58:02.601905  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601924  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601917  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.210665802s)
	I1002 06:58:02.601951  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601961  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602046  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602055  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602062  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602178  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602365  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602381  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602379  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602385  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602399  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602401  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602410  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602351  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602416  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602424  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602390  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602460  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602330  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602552  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602560  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602566  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602767  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602847  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.603001  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603018  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602869  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602869  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603276  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:03.763895  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:58:03.763944  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:03.767733  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768302  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:03.768333  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768654  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:03.768868  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:03.769064  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:03.769213  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:04.277228  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:58:04.505226  566681 addons.go:238] Setting addon gcp-auth=true in "addons-535714"
	I1002 06:58:04.505305  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:58:04.505781  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.505848  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.521300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1002 06:58:04.521841  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.522464  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.522494  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.522889  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.523576  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.523636  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.537716  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1002 06:58:04.538258  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.538728  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.538756  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.539153  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.539385  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:58:04.541614  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:58:04.541849  566681 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:58:04.541880  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:04.545872  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546401  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:04.546429  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546708  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:04.546895  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:04.547027  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:04.547194  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:05.770941  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.372950609s)
	I1002 06:58:05.771023  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771039  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771065  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.304816797s)
	I1002 06:58:05.771113  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771131  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771178  566681 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.1443973s)
	I1002 06:58:05.771222  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.144363906s)
	I1002 06:58:05.771258  566681 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 06:58:05.771308  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.847977896s)
	W1002 06:58:05.771333  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771355  566681 retry.go:31] will retry after 297.892327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771456  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758443398s)
	I1002 06:58:05.771481  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771490  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771540  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.759815099s)
	I1002 06:58:05.771573  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771575  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.552784974s)
	I1002 06:58:05.771584  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771595  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771611  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771719  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.250362363s)
	I1002 06:58:05.771747  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771759  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771942  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.771963  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772013  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772022  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772032  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772040  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772044  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772052  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772059  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772194  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772224  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772230  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772248  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772485  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772523  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772532  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772541  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772549  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772589  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772628  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772636  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772645  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772653  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772709  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772796  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773193  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.773210  566681 addons.go:479] Verifying addon registry=true in "addons-535714"
	I1002 06:58:05.773744  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.773810  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773834  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774038  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.774118  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774129  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772818  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772841  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774925  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774937  566681 addons.go:479] Verifying addon ingress=true in "addons-535714"
	I1002 06:58:05.772862  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775004  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775017  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.775024  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772880  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775347  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775380  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775386  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775394  566681 addons.go:479] Verifying addon metrics-server=true in "addons-535714"
	I1002 06:58:05.776348  566681 node_ready.go:35] waiting up to 6m0s for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.776980  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.776996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.776998  566681 out.go:179] * Verifying registry addon...
	I1002 06:58:05.779968  566681 out.go:179] * Verifying ingress addon...
	I1002 06:58:05.780767  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:58:05.782010  566681 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:58:05.829095  566681 node_ready.go:49] node "addons-535714" is "Ready"
	I1002 06:58:05.829146  566681 node_ready.go:38] duration metric: took 52.75602ms for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.829168  566681 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:58:05.829233  566681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:58:05.834443  566681 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:58:05.834466  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:05.835080  566681 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:58:05.835100  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:05.875341  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.875368  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.875751  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.875763  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.875778  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:58:05.875878  566681 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 06:58:05.909868  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.909898  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.910207  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.910270  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.910287  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.069811  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:06.216033  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039174172s)
	W1002 06:58:06.216104  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216108  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.910297192s)
	I1002 06:58:06.216150  566681 retry.go:31] will retry after 161.340324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216192  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216210  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216504  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216542  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216549  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.216557  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216563  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216800  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216843  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.218514  566681 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535714 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:58:06.294875  566681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535714" context rescaled to 1 replicas
	I1002 06:58:06.324438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:06.327459  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.377937  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:58:06.794270  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.798170  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.296006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.297921  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.825812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.825866  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.904551  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.753782282s)
	I1002 06:58:07.904616  566681 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362740219s)
	I1002 06:58:07.904661  566681 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.075410022s)
	I1002 06:58:07.904685  566681 api_server.go:72] duration metric: took 11.707614799s to wait for apiserver process to appear ...
	I1002 06:58:07.904692  566681 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:58:07.904618  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.904714  566681 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1002 06:58:07.904746  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905650  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.905668  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.905673  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:07.905682  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.905697  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905988  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.906010  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.906023  566681 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:58:07.917720  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:58:07.917721  566681 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:58:07.919394  566681 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:58:07.920319  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:58:07.920611  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:58:07.920631  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:58:07.923712  566681 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1002 06:58:07.935689  566681 api_server.go:141] control plane version: v1.34.1
	I1002 06:58:07.935726  566681 api_server.go:131] duration metric: took 31.026039ms to wait for apiserver health ...
	I1002 06:58:07.935739  566681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:58:07.938642  566681 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:58:07.938662  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:07.962863  566681 system_pods.go:59] 20 kube-system pods found
	I1002 06:58:07.962924  566681 system_pods.go:61] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:07.962934  566681 system_pods.go:61] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962944  566681 system_pods.go:61] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962951  566681 system_pods.go:61] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:07.962955  566681 system_pods.go:61] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:07.962959  566681 system_pods.go:61] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending
	I1002 06:58:07.962962  566681 system_pods.go:61] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:07.962965  566681 system_pods.go:61] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:07.962968  566681 system_pods.go:61] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:07.962973  566681 system_pods.go:61] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:07.962979  566681 system_pods.go:61] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:07.962983  566681 system_pods.go:61] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:07.962988  566681 system_pods.go:61] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:07.962994  566681 system_pods.go:61] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:07.962999  566681 system_pods.go:61] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:07.963005  566681 system_pods.go:61] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:07.963017  566681 system_pods.go:61] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:07.963022  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963031  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963036  566681 system_pods.go:61] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:07.963048  566681 system_pods.go:74] duration metric: took 27.298515ms to wait for pod list to return data ...
	I1002 06:58:07.963061  566681 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:58:07.979696  566681 default_sa.go:45] found service account: "default"
	I1002 06:58:07.979723  566681 default_sa.go:55] duration metric: took 16.655591ms for default service account to be created ...
	I1002 06:58:07.979733  566681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:58:08.050371  566681 system_pods.go:86] 20 kube-system pods found
	I1002 06:58:08.050407  566681 system_pods.go:89] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:08.050415  566681 system_pods.go:89] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050424  566681 system_pods.go:89] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050430  566681 system_pods.go:89] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:08.050438  566681 system_pods.go:89] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:08.050443  566681 system_pods.go:89] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:58:08.050449  566681 system_pods.go:89] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:08.050456  566681 system_pods.go:89] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:08.050463  566681 system_pods.go:89] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:08.050472  566681 system_pods.go:89] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:08.050477  566681 system_pods.go:89] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:08.050485  566681 system_pods.go:89] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:08.050493  566681 system_pods.go:89] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:08.050504  566681 system_pods.go:89] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:08.050512  566681 system_pods.go:89] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:08.050523  566681 system_pods.go:89] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:08.050528  566681 system_pods.go:89] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:08.050537  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050542  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050551  566681 system_pods.go:89] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:08.050567  566681 system_pods.go:126] duration metric: took 70.827007ms to wait for k8s-apps to be running ...
	I1002 06:58:08.050583  566681 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:58:08.050638  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:08.169874  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:58:08.169907  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:58:08.289577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.292025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.296361  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.296391  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:58:08.432642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:08.459596  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.795545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.796983  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.947651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.295174  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.296291  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.426575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.794891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.794937  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.929559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.288382  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.293181  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.428326  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.511821  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.441960114s)
	W1002 06:58:10.511871  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511903  566681 retry.go:31] will retry after 394.105371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511999  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.133998235s)
	I1002 06:58:10.512065  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512084  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512009  566681 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.461351775s)
	I1002 06:58:10.512151  566681 system_svc.go:56] duration metric: took 2.461548607s WaitForService to wait for kubelet
	I1002 06:58:10.512170  566681 kubeadm.go:586] duration metric: took 14.315097833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:58:10.512195  566681 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:58:10.512421  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512436  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.512445  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512451  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512808  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512831  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.525421  566681 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 06:58:10.525467  566681 node_conditions.go:123] node cpu capacity is 2
	I1002 06:58:10.525483  566681 node_conditions.go:105] duration metric: took 13.282233ms to run NodePressure ...
	I1002 06:58:10.525500  566681 start.go:241] waiting for startup goroutines ...
	I1002 06:58:10.876948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.878962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.907099  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:10.933831  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.474178987s)
	I1002 06:58:10.933902  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.933917  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934327  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934351  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.934363  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.934372  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934718  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934741  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.936073  566681 addons.go:479] Verifying addon gcp-auth=true in "addons-535714"
	I1002 06:58:10.939294  566681 out.go:179] * Verifying gcp-auth addon...
	I1002 06:58:10.941498  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:58:10.967193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.967643  566681 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:58:10.967661  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.291995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.292859  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.426822  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.449596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.787220  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.790007  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.927177  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.946352  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.291330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.291893  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.412988  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.505843996s)
	W1002 06:58:12.413060  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.413088  566681 retry.go:31] will retry after 830.72209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.425033  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.449434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.790923  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.792837  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.929132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.949344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.244514  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:13.289311  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.291334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.429008  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.453075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.786448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.787372  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.926128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.944808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.290787  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.291973  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.426597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.446124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.495404  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250841467s)
	W1002 06:58:14.495476  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.495515  566681 retry.go:31] will retry after 993.52867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.787133  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.787363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.925480  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.947120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.288745  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.290247  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.426491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.446707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.489998  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:15.790203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.790718  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.926338  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.947762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.288050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.294216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.426315  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.448623  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.749674  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.259622296s)
	W1002 06:58:16.749739  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.749766  566681 retry.go:31] will retry after 685.893269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.784937  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.789418  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.924303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.945254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.286582  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.289258  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.429493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.436551  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:17.446130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.789304  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.789354  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.927192  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.947272  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.287684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.287964  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.425334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.446542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.793984  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357370737s)
	W1002 06:58:18.794035  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.794058  566681 retry.go:31] will retry after 1.769505645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.818834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.819319  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.926250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.946166  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.286120  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.287299  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.427368  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.446296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.788860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.790575  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.926266  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.946838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.285631  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.286287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.426458  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.448700  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.563743  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:20.784983  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.792452  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.928439  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.946213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.354534  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.355101  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.424438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.447780  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.787792  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.788239  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.926313  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.946909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.986148  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422343909s)
	W1002 06:58:21.986215  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:21.986241  566681 retry.go:31] will retry after 1.591159568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:22.479105  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.490010  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.490062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:22.490154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.785438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.785505  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.924097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.945260  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.287691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.288324  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.424675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.444770  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.578011  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:23.942123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.948294  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.948453  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.950791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.287641  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.287755  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.427062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.445753  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.646106  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.068053257s)
	W1002 06:58:24.646165  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.646192  566681 retry.go:31] will retry after 2.605552754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.785021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.786706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.924880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.945307  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.293097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.295253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.426401  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.448785  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.786965  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.789832  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.926383  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.947419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.286815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.424942  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.444763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.788540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.788706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.945896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.252378  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:27.285347  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.286330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.426765  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.444675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.783930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.785939  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.925152  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.946794  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:27.992201  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:27.992240  566681 retry.go:31] will retry after 8.383284602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:28.292474  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.293236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.449878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:28.785825  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.786277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.930557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.944934  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.288741  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.289425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.425596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.448825  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.791293  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.791772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.925493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.947040  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.289093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.289274  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.429043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.445086  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.787343  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.788106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.925916  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.945578  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.287772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.288130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.424173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.444911  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.839251  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.839613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.924537  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.945244  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.285593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.287197  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.428173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.445646  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.790722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.792545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.924044  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.948465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.287477  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.287815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.426173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.445002  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.789091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.789248  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.926672  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.945340  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.287879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.291550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.424476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.446160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.790769  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.793072  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.924896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.945667  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.523723  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.524500  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.524737  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.525162  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.790230  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.791831  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.924241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.944951  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.289627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.289977  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.375684  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:36.425592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.451074  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.785903  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.787679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.925288  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.947999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.311635  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.311959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.426029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.446091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.636801  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261070571s)
	W1002 06:58:37.636852  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.636877  566681 retry.go:31] will retry after 12.088306464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.784365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.786077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.924729  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.287422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.288052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.424776  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.446043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.787364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.788336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.929977  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.952669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.285777  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.286130  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.425664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.445359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.791043  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.792332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.927261  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.949133  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.297847  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.298155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.508411  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.508530  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.790869  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.791640  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.926541  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.946409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.284335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.288282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.425342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.445476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.786456  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.787369  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.925788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.285122  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.289954  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.427812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.448669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.789086  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.793784  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.981476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.983793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.287301  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.287653  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.425089  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.446115  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.788762  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.788804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.925841  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.946154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.291446  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.291561  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.424642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.445497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.784807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.785666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.924223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.945793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.287330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.288804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.425720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.445387  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.784761  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.787219  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.925198  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.945101  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.287324  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.287453  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.425817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.785000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.786016  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.924786  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.944720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.284615  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.286350  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.424772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.444696  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.784801  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.786247  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.924675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.945863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.285254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.286071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.424850  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.444546  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.784736  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.787062  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.924609  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.285611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.286827  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.424821  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.444716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.726164  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:49.787775  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.787812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.924332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.945915  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:50.285693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.287323  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.425093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:50.457717  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.457755  566681 retry.go:31] will retry after 14.401076568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.785374  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.786592  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.924494  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.946113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.285309  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.424519  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.446358  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.785764  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.787620  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.924671  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.945518  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.284608  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.286328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.426252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.444955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.785415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.786501  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.924360  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.945603  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.286059  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.286081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.426061  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.445434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.784563  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.787018  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.926712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.285670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.286270  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.425263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.445015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.783971  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.785518  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.924652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.944701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.284095  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.285982  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.425045  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.445159  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.784789  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.785811  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.925024  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.945670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.284935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.286230  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.424865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.444979  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.784010  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.785095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.925082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.945267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.285037  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.290841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.423992  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.444492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.785708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.786647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.944543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.284397  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.286589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.424263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.446278  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.784592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.786223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.925275  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.945639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.284167  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.286213  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.424554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.446331  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.786351  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.786532  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.924799  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.944552  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.284593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.286147  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.427708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.446640  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.783993  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.786195  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.925109  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.945645  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.284268  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.286567  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.425880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.444926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.784751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.786669  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.924082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.945409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.285484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.287955  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.424588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.445328  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.785933  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.786611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.924311  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.945554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.284664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.286758  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.424558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.445443  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.785718  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.786015  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.924950  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.945320  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.285692  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.287456  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.423909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.445028  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.784417  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.785847  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.859977  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:04.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.944867  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:05.286410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.424815  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.444725  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:59:05.565727  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.565775  566681 retry.go:31] will retry after 12.962063584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.784083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.785399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.924301  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.945548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.284341  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.285025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.424577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.445930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.785592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.785777  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.924651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.944548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.284807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.286980  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.424593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.445604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.785681  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.786565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.924412  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.945298  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.284890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.285768  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.424422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.446875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.784632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.786747  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.924452  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.284701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.286699  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.424832  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.445005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.785080  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.923720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.944468  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.285848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.285877  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.425574  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.445229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.785800  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.788069  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.924958  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.945132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.284817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.286986  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.424693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.444335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.786755  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.788412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.924402  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.944935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.285499  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.285734  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.424709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.445959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.785549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.788041  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.924691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.944292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.285683  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.424754  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.445585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.784745  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.786053  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.925403  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.945860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.285184  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.286959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.424804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.446097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.791558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.791556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.927542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.949956  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.284639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.286617  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.426580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.446175  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.784496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.787071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.925830  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.945618  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.286160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:16.287392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.424973  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.446497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.789545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.790116  566681 kapi.go:107] duration metric: took 1m11.009348953s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:59:16.925187  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.947267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.287647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.426165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:17.450844  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.786988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.928406  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.027597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.293020  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.429378  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.449227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.528488  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:18.796448  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.929553  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.946292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.288404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.429199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.452666  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.792639  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.864991  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.336449949s)
	W1002 06:59:19.865069  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:19.865160  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865179  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865566  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:59:19.865575  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:59:19.865834  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:59:19.865969  566681 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
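[Editor's note: the failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest lacks `apiVersion` and `kind`. The error message itself names the workaround. A hypothetical manual retry on the node, assuming the same kubeconfig, kubectl binary path, and addon manifests shown in the log, would be:

```shell
# Re-apply the inspektor-gadget addon manifests with client-side
# validation disabled, as suggested by the error output above.
# --validate=false skips the schema check that rejects the
# apiVersion/kind-less ig-crd.yaml; the real fix is to add those
# fields to the manifest.
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
  -f /etc/kubernetes/addons/ig-crd.yaml \
  -f /etc/kubernetes/addons/ig-deployment.yaml
```

This only suppresses the symptom; a CRD manifest must still declare `apiVersion: apiextensions.k8s.io/v1` and `kind: CustomResourceDefinition` to be applied server-side.]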
	I1002 06:59:19.924481  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.945058  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.286730  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.424767  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.445496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.787056  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.925303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.945594  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.285610  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.424114  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.445438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.786589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.924253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.944783  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.285375  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.424724  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.445811  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.828328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.929492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.945629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.286455  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.424116  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.444871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.785953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.924350  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.945321  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.286907  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.424613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.445706  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.786265  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.925165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.944432  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.286899  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.424337  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.445373  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.786646  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.924121  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.944695  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.286707  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.425250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.445323  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.786287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.945489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.286403  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.424957  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.445376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.786820  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.924170  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.945197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.424241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.445542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.786419  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.945740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.286366  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.425536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.445687  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.788123  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.944760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.285395  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.425015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.445071  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.786362  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.925693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.945540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.286268  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.424296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.446123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.786155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.926684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.945375  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.286413  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.424180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.444838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.786253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.925151  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.944944  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.288748  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.425620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.445650  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.786358  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.924738  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.944757  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.285092  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.424998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.445067  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.786516  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.924306  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.945543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.286428  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.423533  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.445039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.785517  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.944555  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.286468  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.424778  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.444808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.785451  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.924018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.287660  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.424005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.445419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.785743  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.924870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.944575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.286370  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.424689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.444639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.786644  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.928760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.945529  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.286055  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.425011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.787058  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.924829  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.944865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.285681  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.424212  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.445570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.786536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.924039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.945611  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.286872  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.425081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.445160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.785854  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.924803  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.945395  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.286806  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.424531  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.445213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.785794  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.924199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.946416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.287223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.425005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.445179  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.786152  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.945545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.286313  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.425004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.445925  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.786682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.944902  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.286167  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.424932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.444879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.785378  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.925864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.945123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.286422  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.424954  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.445018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.786489  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.924425  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.945064  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.286244  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.425181  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.445110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.785417  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.923870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.944712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.287782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.424751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.444542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.786556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.924410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.945514  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.286856  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.424634  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.444823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.786341  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.925249  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.945585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.287532  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.427364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.449565  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.926679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.947416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.289682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.428232  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.445465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.787537  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.926415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.945253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.285757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.424433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.448251  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.785971  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.928422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.946461  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.286536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.452271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.786128  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.926032  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.946426  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.287601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.424345  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.445705  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.787096  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.924759  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.946688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.290180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.519704  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.519891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.926689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.946557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.286054  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.425914  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.447300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.785957  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.924030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.949871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.291565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.428120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.526092  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.786283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.933203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.952823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.290757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:58.425788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:58.445898  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.785286  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.135410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.135484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.289658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.424763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.444901  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.789990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.927768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.950570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.288666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.424489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.444995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.785712  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.928193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.945797  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.289874  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.429342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.447102  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.787399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.924633  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.944955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.288296  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.432709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.448119  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.788304  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.936551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.950283  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.291180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.429826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.446896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.789649  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.930297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.285728  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.423878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.445021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.785989  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.926604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.946365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.289629  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.424560  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.446580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.786184  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.925038  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.286414  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.425072  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.445415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.786235  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.924932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.945108  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.286318  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.425639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.445791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.787192  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:07.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:07.945680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.286388  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.424699  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.445180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:08.786177  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.927180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.945006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.285412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.424690  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.444685  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.787988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.926782  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.944680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.286385  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.425422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.445890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.785391  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.925292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.946110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.286953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.424926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.445097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.785990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.925536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.945882  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.286095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.426218  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.445400  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.787180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.924959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.945605  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.424843  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.445297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.786034  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.927087  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.945676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.286216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.424888  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.444768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.785283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.925300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.945536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.287658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.424359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.445282  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.785834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.924384  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.945604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.286392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.424670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.445327  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.786482  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.924913  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.944676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.286962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.428554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.445872  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.787125  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.924730  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.945508  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.286528  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.426864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.786434  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.926688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.945265  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.286255  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.425491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.446113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.925826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.946549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.286336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.424707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.444772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.785404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.925678  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.945252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.425487  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.447029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.786550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.945389  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.288156  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.425586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.446602  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.787696  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.924004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.286521  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.424493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.446224  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.786604  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.925118  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.945482  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.424632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.445848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.785791  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.927001  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.944907  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.288049  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.424875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.444559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.786767  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.925226  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.945050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.285958  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.426083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.444740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.787052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.945062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.285717  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.424050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.444966  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.787841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.924740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.945492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.286484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.424236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.445504  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.786601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.924551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.945948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.288423  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.424871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.445286  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.786695  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.926223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.945407  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.286021  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.425588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.445469  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.786883  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.926085  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.945814  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.287360  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.424981  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.445361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.787680  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.924556  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.945363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.288077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.425366  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.447433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.847272  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.946629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.946982  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.285658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.424106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.445538  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.787044  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.927886  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.944580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.290469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.425444  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.448620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.789282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.930009  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.948721  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.287469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.432852  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.446652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.788507  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.930180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.954772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.293484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.435262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.449271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.788843  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.928945  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.288443  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.427657  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.447716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.787995  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.933694  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.946106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.287636  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.427229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.446000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.788221  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.925863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.944669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:39.286808  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:39.425719  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:39.446011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.005533  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.011858  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.013227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.289216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.429330  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.446597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.788887  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.934361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.949590  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.288436  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.426586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.446712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.790082  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.926762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.948030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.286904  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.428171  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.447262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.787879  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.928999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.947900  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.289340  566681 kapi.go:107] duration metric: took 2m37.507327929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 07:00:43.426593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.445627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.927030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.946124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.426277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.445511  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.928128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.945892  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.424940  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.445245  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.925479  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.948084  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:46.427998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:46.446348  566681 kapi.go:107] duration metric: took 2m35.504841728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 07:00:46.448361  566681 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-535714 cluster.
	I1002 07:00:46.449772  566681 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 07:00:46.451121  566681 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 07:00:46.925947  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.429007  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.927793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.430587  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.930344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.428197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.928448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:50.425299  566681 kapi.go:107] duration metric: took 2m42.504972928s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 07:00:50.428467  566681 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 07:00:50.429978  566681 addons.go:514] duration metric: took 2m54.232824958s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds metrics-server storage-provisioner storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 07:00:50.430050  566681 start.go:246] waiting for cluster config update ...
	I1002 07:00:50.430076  566681 start.go:255] writing updated cluster config ...
	I1002 07:00:50.430525  566681 ssh_runner.go:195] Run: rm -f paused
	I1002 07:00:50.439887  566681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:50.446240  566681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.451545  566681 pod_ready.go:94] pod "coredns-66bc5c9577-w7hjm" is "Ready"
	I1002 07:00:50.451589  566681 pod_ready.go:86] duration metric: took 5.295665ms for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.454257  566681 pod_ready.go:83] waiting for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.459251  566681 pod_ready.go:94] pod "etcd-addons-535714" is "Ready"
	I1002 07:00:50.459291  566681 pod_ready.go:86] duration metric: took 4.998226ms for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.463385  566681 pod_ready.go:83] waiting for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.473863  566681 pod_ready.go:94] pod "kube-apiserver-addons-535714" is "Ready"
	I1002 07:00:50.473899  566681 pod_ready.go:86] duration metric: took 10.481477ms for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.478391  566681 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.845519  566681 pod_ready.go:94] pod "kube-controller-manager-addons-535714" is "Ready"
	I1002 07:00:50.845556  566681 pod_ready.go:86] duration metric: took 367.127625ms for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.046035  566681 pod_ready.go:83] waiting for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.445054  566681 pod_ready.go:94] pod "kube-proxy-z495t" is "Ready"
	I1002 07:00:51.445095  566681 pod_ready.go:86] duration metric: took 399.024039ms for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.644949  566681 pod_ready.go:83] waiting for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045721  566681 pod_ready.go:94] pod "kube-scheduler-addons-535714" is "Ready"
	I1002 07:00:52.045756  566681 pod_ready.go:86] duration metric: took 400.769133ms for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045769  566681 pod_ready.go:40] duration metric: took 1.605821704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:52.107681  566681 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 07:00:52.109482  566681 out.go:179] * Done! kubectl is now configured to use "addons-535714" cluster and "default" namespace by default
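The readiness phase above polls each control-plane pod by label selector (the six selectors listed in the `pod_ready.go:37` line) until every pod reports "Ready". minikube does this internally via client-go; a rough kubectl equivalent can be sketched as follows. This is a hedged sketch, not minikube's actual code path: it only prints the commands rather than running them, and the context name `addons-535714`, the `kube-system` namespace, and the 4m0s budget are taken from the log above.

```shell
# Print kubectl-wait equivalents of the readiness checks in the log.
# One command per label selector; nothing is executed against a cluster.
for selector in k8s-app=kube-dns component=etcd component=kube-apiserver \
                component=kube-controller-manager k8s-app=kube-proxy \
                component=kube-scheduler; do
  echo "kubectl --context addons-535714 -n kube-system wait" \
       "--for=condition=Ready pod -l ${selector} --timeout=4m0s"
done
```

Note that `kubectl wait` fails if no pod matches the selector, whereas minikube's check also accepts the pod being gone ("Ready or be gone" in the log), so the two are not strictly equivalent.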
	
	
	==> CRI-O <==
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.523827094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388921523798874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3f23db5-df88-4ea9-bd39-094be433c14f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.525441093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af489a10-5408-4b92-a57c-a56f5801113c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.525519756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af489a10-5408-4b92-a57c-a56f5801113c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.526207625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9
741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},I
mage:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283
bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af489a10-5408-4b92-a57c-a56f5801113c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.574467690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a80c6259-22fa-4d40-9439-4e1c4359b6e0 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.574584244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a80c6259-22fa-4d40-9439-4e1c4359b6e0 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.576554082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f6251b7-1dca-40b2-a72a-bac190ceef2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.578942811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388921578854927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f6251b7-1dca-40b2-a72a-bac190ceef2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.580008095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fde7771b-f6ef-4bb0-b479-46f1c67663a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.580168275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fde7771b-f6ef-4bb0-b479-46f1c67663a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.580721291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9
741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},I
mage:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283
bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fde7771b-f6ef-4bb0-b479-46f1c67663a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.618853569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2b4c317-47b6-48dc-a45e-a91037ee84c9 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.619230144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2b4c317-47b6-48dc-a45e-a91037ee84c9 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.620571710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f152eb6d-41b1-4f30-8ba8-0fa64a0131d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.621708852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388921621683833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f152eb6d-41b1-4f30-8ba8-0fa64a0131d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.622460429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7572948-e50b-47a2-8d30-4d5616496cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.622577654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7572948-e50b-47a2-8d30-4d5616496cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.623487150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9
741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},I
mage:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283
bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7572948-e50b-47a2-8d30-4d5616496cf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.661786530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73a8ed63-1862-41ac-8670-36f12037af81 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.661983294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73a8ed63-1862-41ac-8670-36f12037af81 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.663441288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d658c2db-bb51-4d64-8fa2-5b61cb81c4bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.664587307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388921664558524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d658c2db-bb51-4d64-8fa2-5b61cb81c4bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.665441489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6006096e-b37d-49dd-a7ef-dad73fedeb42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.665516945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6006096e-b37d-49dd-a7ef-dad73fedeb42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:08:41 addons-535714 crio[827]: time="2025-10-02 07:08:41.665986407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9
741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.p
od.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac01
15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},I
mage:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283
bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6006096e-b37d-49dd-a7ef-dad73fedeb42 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	86667c9385b67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   2fe38d26ed81e       busybox
	6e1593fcd2d1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f4c3a8c0ea5c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	65d9fdba36a17       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   e2277305f110b       csi-hostpathplugin-8sjk8
	81f190fa89d8e       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             7 minutes ago       Running             controller                               0                   2c0a4b75d16bb       ingress-nginx-controller-9cc49f96f-jcwrw
	0683a8b55d03d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3456f5ab4e9db       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                8 minutes ago       Running             node-driver-registrar                    0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f6808e1f9304       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   dabf0b0e1eb70       csi-hostpath-resizer-0
	24139e6a7a8b1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   e2ed9baa384a5       csi-hostpath-attacher-0
	46de36d65127e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   8 minutes ago       Running             csi-external-health-monitor-controller   0                   e2277305f110b       csi-hostpathplugin-8sjk8
	98d5407fe4705       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   e2ad15837b991       snapshot-controller-7d9fbc56b8-g4hd4
	ea44a6e53635f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   bbec6993c46f7       snapshot-controller-7d9fbc56b8-knwl8
	2f84e33ebf14f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   8 minutes ago       Exited              patch                                    0                   45c7f94d02bfb       ingress-nginx-admission-patch-46z2n
	5ce0b3e6c8fef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   8 minutes ago       Exited              create                                   0                   13a0722f22fb7       ingress-nginx-admission-create-jsw7z
	d20e001ce5fa7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            8 minutes ago       Running             gadget                                   0                   53cbb87b563ff       gadget-2hn79
	c68a602009da4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               10 minutes ago      Running             minikube-ingress-dns                     0                   1239599eb3508       kube-ingress-dns-minikube
	0f29426982799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   348af25e84579       storage-provisioner
	58aa192645e96       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     10 minutes ago      Running             amd-gpu-device-plugin                    0                   dba3c49629455       amd-gpu-device-plugin-f7qcs
	6e31cb36c4500       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             10 minutes ago      Running             coredns                                  0                   4fcabfc373e60       coredns-66bc5c9577-w7hjm
	fb130499febb3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             10 minutes ago      Running             kube-proxy                               0                   646600c8d86f7       kube-proxy-z495t
	466837c8cdfcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             10 minutes ago      Running             etcd                                     0                   c7d4e0eb984a2       etcd-addons-535714
	da8295539fc0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             10 minutes ago      Running             kube-scheduler                           0                   36d2846a22a84       kube-scheduler-addons-535714
	da58df3cad660       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             10 minutes ago      Running             kube-controller-manager                  0                   63f4cb9d3437a       kube-controller-manager-addons-535714
	deaf436584a26       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             10 minutes ago      Running             kube-apiserver                           0                   35f49d5f3b8fb       kube-apiserver-addons-535714
	
	
	==> coredns [6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb] <==
	[INFO] 10.244.0.7:35110 - 11487 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000105891s
	[INFO] 10.244.0.7:35110 - 31639 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100284s
	[INFO] 10.244.0.7:35110 - 25746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080168s
	[INFO] 10.244.0.7:35110 - 43819 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100728s
	[INFO] 10.244.0.7:35110 - 63816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124028s
	[INFO] 10.244.0.7:35110 - 35022 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129164s
	[INFO] 10.244.0.7:35110 - 28119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001725128s
	[INFO] 10.244.0.7:50584 - 36630 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148556s
	[INFO] 10.244.0.7:50584 - 36962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067971s
	[INFO] 10.244.0.7:37190 - 758 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052949s
	[INFO] 10.244.0.7:37190 - 1043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051809s
	[INFO] 10.244.0.7:37461 - 4143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057036s
	[INFO] 10.244.0.7:37461 - 4397 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049832s
	[INFO] 10.244.0.7:36180 - 39849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111086s
	[INFO] 10.244.0.7:36180 - 40050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069757s
	[INFO] 10.244.0.23:54237 - 52266 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001020809s
	[INFO] 10.244.0.23:46188 - 47837 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755825s
	[INFO] 10.244.0.23:50620 - 40298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145474s
	[INFO] 10.244.0.23:46344 - 40921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123896s
	[INFO] 10.244.0.23:50353 - 65439 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000272665s
	[INFO] 10.244.0.23:50633 - 23346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143762s
	[INFO] 10.244.0.23:52616 - 28857 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002777615s
	[INFO] 10.244.0.23:55533 - 44086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003112269s
	[INFO] 10.244.0.27:55844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000811242s
	[INFO] 10.244.0.27:51921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000498985s
	
	
	==> describe nodes <==
	Name:               addons-535714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-535714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535714
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-535714"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:57:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535714
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-535714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 26ed18e3cae343e2ba2a85be4a0a7371
	  System UUID:                26ed18e3-cae3-43e2-ba2a-85be4a0a7371
	  Boot ID:                    73babc46-f812-4e67-b425-db513a204e97
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-2hn79                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jcwrw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-f7qcs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-w7hjm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-8sjk8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-535714                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-535714                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-535714       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-z495t                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-535714                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-g4hd4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-knwl8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-535714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-535714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-535714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-535714 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-535714 event: Registered Node addons-535714 in Controller
	
	
	==> dmesg <==
	[ +33.860109] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.779557] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 2 07:00] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.976810] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.109220] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.510995] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.560914] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.223140] kauditd_printk_skb: 56 callbacks suppressed
	[Oct 2 07:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.884695] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.185211] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.060908] kauditd_printk_skb: 58 callbacks suppressed
	[Oct 2 07:02] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.331616] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.250929] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:03] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.099939] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.783953] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 07:06] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000320] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 07:08] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.251210] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2] <==
	{"level":"warn","ts":"2025-10-02T06:59:59.121300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.835357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.121339Z","caller":"traceutil/trace.go:172","msg":"trace[1316712396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"183.87568ms","start":"2025-10-02T06:59:58.937457Z","end":"2025-10-02T06:59:59.121332Z","steps":["trace[1316712396] 'agreement among raft nodes before linearized reading'  (duration: 183.815946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:32.832647Z","caller":"traceutil/trace.go:172","msg":"trace[1453851995] linearizableReadLoop","detail":"{readStateIndex:1231; appliedIndex:1231; }","duration":"220.066962ms","start":"2025-10-02T07:00:32.612509Z","end":"2025-10-02T07:00:32.832576Z","steps":["trace[1453851995] 'read index received'  (duration: 220.05963ms)","trace[1453851995] 'applied index is now lower than readState.Index'  (duration: 6.189µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T07:00:32.832730Z","caller":"traceutil/trace.go:172","msg":"trace[302351669] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"243.94686ms","start":"2025-10-02T07:00:32.588772Z","end":"2025-10-02T07:00:32.832719Z","steps":["trace[302351669] 'process raft request'  (duration: 243.833114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:32.832967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.479862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T07:00:32.833001Z","caller":"traceutil/trace.go:172","msg":"trace[1089606970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"220.525584ms","start":"2025-10-02T07:00:32.612469Z","end":"2025-10-02T07:00:32.832995Z","steps":["trace[1089606970] 'agreement among raft nodes before linearized reading'  (duration: 220.422716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.990824Z","caller":"traceutil/trace.go:172","msg":"trace[1822440841] linearizableReadLoop","detail":"{readStateIndex:1259; appliedIndex:1259; }","duration":"216.288139ms","start":"2025-10-02T07:00:39.774473Z","end":"2025-10-02T07:00:39.990762Z","steps":["trace[1822440841] 'read index received'  (duration: 216.279919ms)","trace[1822440841] 'applied index is now lower than readState.Index'  (duration: 6.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:00:39.991358Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.077704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:00:39.991456Z","caller":"traceutil/trace.go:172","msg":"trace[1082597067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1206; }","duration":"217.190679ms","start":"2025-10-02T07:00:39.774258Z","end":"2025-10-02T07:00:39.991449Z","steps":["trace[1082597067] 'agreement among raft nodes before linearized reading'  (duration: 216.738402ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.992313Z","caller":"traceutil/trace.go:172","msg":"trace[515400758] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"337.963385ms","start":"2025-10-02T07:00:39.654341Z","end":"2025-10-02T07:00:39.992305Z","steps":["trace[515400758] 'process raft request'  (duration: 337.312964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:39.992477Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.654280Z","time spent":"338.099015ms","remote":"127.0.0.1:56776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1205 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-02T07:00:39.994757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.655974Z","time spent":"338.780211ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-10-02T07:02:18.249354Z","caller":"traceutil/trace.go:172","msg":"trace[1937839981] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"110.209012ms","start":"2025-10-02T07:02:18.139042Z","end":"2025-10-02T07:02:18.249251Z","steps":["trace[1937839981] 'process raft request'  (duration: 107.760601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:02:25.358154Z","caller":"traceutil/trace.go:172","msg":"trace[1514029901] linearizableReadLoop","detail":"{readStateIndex:1683; appliedIndex:1683; }","duration":"269.707219ms","start":"2025-10-02T07:02:25.088427Z","end":"2025-10-02T07:02:25.358135Z","steps":["trace[1514029901] 'read index received'  (duration: 269.698824ms)","trace[1514029901] 'applied index is now lower than readState.Index'  (duration: 7.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:02:25.358835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.337456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.358908Z","caller":"traceutil/trace.go:172","msg":"trace[129833481] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"270.47424ms","start":"2025-10-02T07:02:25.088423Z","end":"2025-10-02T07:02:25.358898Z","steps":["trace[129833481] 'agreement among raft nodes before linearized reading'  (duration: 270.303097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.361904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.156634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.361957Z","caller":"traceutil/trace.go:172","msg":"trace[228810763] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1605; }","duration":"257.224721ms","start":"2025-10-02T07:02:25.104724Z","end":"2025-10-02T07:02:25.361949Z","steps":["trace[228810763] 'agreement among raft nodes before linearized reading'  (duration: 257.141662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.13527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363670Z","caller":"traceutil/trace.go:172","msg":"trace[2116337020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1606; }","duration":"129.197912ms","start":"2025-10-02T07:02:25.234464Z","end":"2025-10-02T07:02:25.363662Z","steps":["trace[2116337020] 'agreement among raft nodes before linearized reading'  (duration: 129.113844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.575698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363939Z","caller":"traceutil/trace.go:172","msg":"trace[2132272707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1606; }","duration":"192.616449ms","start":"2025-10-02T07:02:25.171317Z","end":"2025-10-02T07:02:25.363933Z","steps":["trace[2132272707] 'agreement among raft nodes before linearized reading'  (duration: 192.563634ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:07:46.437056Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1714}
	{"level":"info","ts":"2025-10-02T07:07:46.499568Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1714,"took":"60.828637ms","hash":1204393910,"current-db-size-bytes":5812224,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3612672,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-10-02T07:07:46.499630Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1204393910,"revision":1714,"compact-revision":-1}
	
	
	==> kernel <==
	 07:08:42 up 11 min,  0 users,  load average: 0.43, 0.78, 0.68
	Linux addons-535714 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68] <==
	I1002 07:01:12.978874       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.6.38"}
	I1002 07:01:20.686056       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1002 07:07:44.745143       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:45.754581       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:46.761944       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:47.770723       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 07:07:47.825435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1002 07:07:48.778443       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:49.786351       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:50.793831       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:51.802184       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:52.814820       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:53.822962       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:54.831164       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:55.840848       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:56.848440       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:57.857584       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:58.865881       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:07:59.875606       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:00.882643       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:01.890990       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:02.898772       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:03.911372       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:04.925124       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1002 07:08:05.933262       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20] <==
	I1002 06:57:54.858148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 06:57:54.858221       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 06:57:54.858258       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 06:57:54.858263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 06:57:54.858268       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 06:57:54.860904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:57:54.863351       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:57:54.869106       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-535714" podCIDRs=["10.244.0.0/24"]
	E1002 06:58:03.439760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:58:24.819245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:24.819664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:58:24.819801       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:58:24.847762       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:58:24.855798       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:58:24.921306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:58:24.957046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:58:54.928427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:54.966681       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 07:01:16.701698       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1002 07:02:37.947143       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E1002 07:07:39.830657       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:07:54.831477       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:08:09.832200       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:08:24.833214       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:08:39.833406       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b] <==
	I1002 06:57:56.940558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:57:57.042011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:57:57.042117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1002 06:57:57.042205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:57:57.167383       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 06:57:57.167427       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 06:57:57.167460       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:57:57.190949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:57:57.192886       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:57:57.192902       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:57:57.294325       1 config.go:200] "Starting service config controller"
	I1002 06:57:57.294358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:57:57.294429       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:57:57.294434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:57:57.294455       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:57:57.294459       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:57:57.438397       1 config.go:309] "Starting node config controller"
	I1002 06:57:57.441950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:57:57.479963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:57:57.494463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:57:57.494530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:57:57.494543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca] <==
	E1002 06:57:47.853654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:57:47.853709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:47.853767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:57:47.853824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:57:47.854040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:47.855481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:47.854491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.707149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:57:48.761606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:57:48.783806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:57:48.817274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:57:48.856898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:48.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:57:48.860214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:57:48.880906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.896863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:48.913429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:57:48.964287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:57:48.985241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:57:49.005874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:57:49.118344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:57:49.123456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:49.157781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:57:49.202768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 06:57:51.042340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:08:10 addons-535714 kubelet[1509]: E1002 07:08:10.177826    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:08:11 addons-535714 kubelet[1509]: E1002 07:08:11.785671    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388891785214703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:11 addons-535714 kubelet[1509]: E1002 07:08:11.785715    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388891785214703  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:12 addons-535714 kubelet[1509]: E1002 07:08:12.388486    1509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:08:12 addons-535714 kubelet[1509]: E1002 07:08:12.388570    1509 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:08:12 addons-535714 kubelet[1509]: E1002 07:08:12.388797    1509 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3_local-path-storage(43bb5445-e38f-4659-ba34-65c081b7d396): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:08:12 addons-535714 kubelet[1509]: E1002 07:08:12.388853    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3" podUID="43bb5445-e38f-4659-ba34-65c081b7d396"
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.980423    1509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/43bb5445-e38f-4659-ba34-65c081b7d396-script\") pod \"43bb5445-e38f-4659-ba34-65c081b7d396\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") "
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.980461    1509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/43bb5445-e38f-4659-ba34-65c081b7d396-data\") pod \"43bb5445-e38f-4659-ba34-65c081b7d396\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") "
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.980489    1509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvtcf\" (UniqueName: \"kubernetes.io/projected/43bb5445-e38f-4659-ba34-65c081b7d396-kube-api-access-nvtcf\") pod \"43bb5445-e38f-4659-ba34-65c081b7d396\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") "
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.981305    1509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bb5445-e38f-4659-ba34-65c081b7d396-script" (OuterVolumeSpecName: "script") pod "43bb5445-e38f-4659-ba34-65c081b7d396" (UID: "43bb5445-e38f-4659-ba34-65c081b7d396"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.981355    1509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43bb5445-e38f-4659-ba34-65c081b7d396-data" (OuterVolumeSpecName: "data") pod "43bb5445-e38f-4659-ba34-65c081b7d396" (UID: "43bb5445-e38f-4659-ba34-65c081b7d396"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 07:08:12 addons-535714 kubelet[1509]: I1002 07:08:12.985837    1509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43bb5445-e38f-4659-ba34-65c081b7d396-kube-api-access-nvtcf" (OuterVolumeSpecName: "kube-api-access-nvtcf") pod "43bb5445-e38f-4659-ba34-65c081b7d396" (UID: "43bb5445-e38f-4659-ba34-65c081b7d396"). InnerVolumeSpecName "kube-api-access-nvtcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 07:08:13 addons-535714 kubelet[1509]: I1002 07:08:13.081697    1509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvtcf\" (UniqueName: \"kubernetes.io/projected/43bb5445-e38f-4659-ba34-65c081b7d396-kube-api-access-nvtcf\") on node \"addons-535714\" DevicePath \"\""
	Oct 02 07:08:13 addons-535714 kubelet[1509]: I1002 07:08:13.081735    1509 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/43bb5445-e38f-4659-ba34-65c081b7d396-script\") on node \"addons-535714\" DevicePath \"\""
	Oct 02 07:08:13 addons-535714 kubelet[1509]: I1002 07:08:13.081744    1509 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/43bb5445-e38f-4659-ba34-65c081b7d396-data\") on node \"addons-535714\" DevicePath \"\""
	Oct 02 07:08:15 addons-535714 kubelet[1509]: I1002 07:08:15.183352    1509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43bb5445-e38f-4659-ba34-65c081b7d396" path="/var/lib/kubelet/pods/43bb5445-e38f-4659-ba34-65c081b7d396/volumes"
	Oct 02 07:08:21 addons-535714 kubelet[1509]: E1002 07:08:21.789660    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388901788394864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:21 addons-535714 kubelet[1509]: E1002 07:08:21.789885    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388901788394864  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:25 addons-535714 kubelet[1509]: E1002 07:08:25.178458    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:08:31 addons-535714 kubelet[1509]: E1002 07:08:31.792292    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388911791783740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:31 addons-535714 kubelet[1509]: E1002 07:08:31.792676    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388911791783740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:36 addons-535714 kubelet[1509]: E1002 07:08:36.175346    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:08:41 addons-535714 kubelet[1509]: E1002 07:08:41.797346    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388921796326232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:08:41 addons-535714 kubelet[1509]: E1002 07:08:41.797922    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388921796326232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	
	
	==> storage-provisioner [0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0] <==
	W1002 07:08:16.893542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:18.898645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:18.911457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:20.916001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:20.922014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:22.925477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:22.931168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:24.934883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:24.943457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:26.947918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:26.954854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:28.960033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:28.966425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:30.970045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:30.976613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:32.980898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:32.990200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:34.994610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:35.000057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:37.004170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:37.009944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:39.013505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:39.020646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:41.025718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:08:41.037788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-535714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n: exit status 1 (91.032066ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jxhkh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m31s                  default-scheduler  Successfully assigned default/nginx to addons-535714
	  Warning  Failed     5m46s (x2 over 6m28s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m20s (x4 over 7m30s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     61s (x4 over 6m28s)    kubelet            Error: ErrImagePull
	  Warning  Failed     61s (x2 over 3m2s)     kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x8 over 6m27s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x8 over 6m27s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:02:40 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znf77 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-znf77:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-535714
	  Normal   BackOff    106s (x2 over 4m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     106s (x2 over 4m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    91s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     1s (x3 over 4m2s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     1s (x3 over 4m2s)    kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48lf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g48lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jsw7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46z2n" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable volumesnapshots --alsologtostderr -v=1: (1.134424197s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.89824932s)
--- FAIL: TestAddons/parallel/CSI (384.24s)

TestAddons/parallel/LocalPath (343.01s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-535714 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-535714 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-535714 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: [the same `kubectl get pvc test-pvc` poll repeated identically 60 more times until the context deadline]
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535714 -n addons-535714
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 logs -n 25: (1.493786667s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	(all entries: user jenkins, minikube v1.37.0, date 02 Oct 25, times UTC)
	COMMAND │ PROFILE              │ START → END   │ ARGS
	delete  │ minikube             │ 06:57 → 06:57 │ --all
	delete  │ download-only-760196 │ 06:57 → 06:57 │ -p download-only-760196
	start   │ download-only-169608 │ 06:57 →       │ -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
	delete  │ minikube             │ 06:57 → 06:57 │ --all
	delete  │ download-only-169608 │ 06:57 → 06:57 │ -p download-only-169608
	delete  │ download-only-760196 │ 06:57 → 06:57 │ -p download-only-760196
	delete  │ download-only-169608 │ 06:57 → 06:57 │ -p download-only-169608
	start   │ binary-mirror-257523 │ 06:57 →       │ --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
	delete  │ binary-mirror-257523 │ 06:57 → 06:57 │ -p binary-mirror-257523
	addons  │ addons-535714        │ 06:57 →       │ enable dashboard -p addons-535714
	addons  │ addons-535714        │ 06:57 →       │ disable dashboard -p addons-535714
	start   │ addons-535714        │ 06:57 → 07:00 │ -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
	addons  │ addons-535714        │ 07:00 → 07:00 │ addons-535714 addons disable volcano --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ addons-535714 addons disable gcp-auth --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ enable headlamp -p addons-535714 --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ addons-535714 addons disable metrics-server --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714
	addons  │ addons-535714        │ 07:01 → 07:01 │ addons-535714 addons disable registry-creds --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:01 → 07:01 │ addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1
	ip      │ addons-535714        │ 07:02 → 07:02 │ addons-535714 ip
	addons  │ addons-535714        │ 07:02 → 07:02 │ addons-535714 addons disable registry --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:02 → 07:02 │ addons-535714 addons disable headlamp --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:03 → 07:03 │ addons-535714 addons disable yakd --alsologtostderr -v=1
	addons  │ addons-535714        │ 07:03 → 07:03 │ addons-535714 addons disable cloud-spanner --alsologtostderr -v=1
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:12.613104  566681 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:12.613401  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613412  566681 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:12.613416  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613691  566681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:12.614327  566681 out.go:368] Setting JSON to false
	I1002 06:57:12.615226  566681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49183,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:12.615318  566681 start.go:140] virtualization: kvm guest
	I1002 06:57:12.616912  566681 out.go:179] * [addons-535714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:12.618030  566681 notify.go:220] Checking for updates...
	I1002 06:57:12.618070  566681 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:57:12.619267  566681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:12.620404  566681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:12.621815  566681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:12.622922  566681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:57:12.623998  566681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:57:12.625286  566681 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:12.655279  566681 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 06:57:12.656497  566681 start.go:304] selected driver: kvm2
	I1002 06:57:12.656511  566681 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:12.656523  566681 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:57:12.657469  566681 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.657563  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.671466  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.671499  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.684735  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.684785  566681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:12.685037  566681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:57:12.685069  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:12.685110  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:12.685121  566681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:12.685226  566681 start.go:348] cluster config:
	{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:12.685336  566681 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.687549  566681 out.go:179] * Starting "addons-535714" primary control-plane node in "addons-535714" cluster
	I1002 06:57:12.688758  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:12.688809  566681 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:12.688824  566681 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:12.688927  566681 preload.go:233] Found /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:57:12.688941  566681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:57:12.689355  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:12.689385  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json: {Name:mkd226c1b0f282f7928061e8123511cda66ecb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:12.689560  566681 start.go:360] acquireMachinesLock for addons-535714: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 06:57:12.689631  566681 start.go:364] duration metric: took 53.377µs to acquireMachinesLock for "addons-535714"
	I1002 06:57:12.689654  566681 start.go:93] Provisioning new machine with config: &{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:12.689738  566681 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 06:57:12.691999  566681 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 06:57:12.692183  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:12.692244  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:12.705101  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I1002 06:57:12.705724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:12.706300  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:12.706320  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:12.706770  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:12.707010  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:12.707209  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:12.707401  566681 start.go:159] libmachine.API.Create for "addons-535714" (driver="kvm2")
	I1002 06:57:12.707450  566681 client.go:168] LocalClient.Create starting
	I1002 06:57:12.707494  566681 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem
	I1002 06:57:12.888250  566681 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem
	I1002 06:57:13.081005  566681 main.go:141] libmachine: Running pre-create checks...
	I1002 06:57:13.081030  566681 main.go:141] libmachine: (addons-535714) Calling .PreCreateCheck
	I1002 06:57:13.081598  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:13.082053  566681 main.go:141] libmachine: Creating machine...
	I1002 06:57:13.082069  566681 main.go:141] libmachine: (addons-535714) Calling .Create
	I1002 06:57:13.082276  566681 main.go:141] libmachine: (addons-535714) creating domain...
	I1002 06:57:13.082300  566681 main.go:141] libmachine: (addons-535714) creating network...
	I1002 06:57:13.083762  566681 main.go:141] libmachine: (addons-535714) DBG | found existing default network
	I1002 06:57:13.084004  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084021  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>default</name>
	I1002 06:57:13.084029  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 06:57:13.084036  566681 main.go:141] libmachine: (addons-535714) DBG |   <forward mode='nat'>
	I1002 06:57:13.084041  566681 main.go:141] libmachine: (addons-535714) DBG |     <nat>
	I1002 06:57:13.084047  566681 main.go:141] libmachine: (addons-535714) DBG |       <port start='1024' end='65535'/>
	I1002 06:57:13.084051  566681 main.go:141] libmachine: (addons-535714) DBG |     </nat>
	I1002 06:57:13.084055  566681 main.go:141] libmachine: (addons-535714) DBG |   </forward>
	I1002 06:57:13.084061  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 06:57:13.084068  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 06:57:13.084084  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 06:57:13.084098  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084111  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 06:57:13.084123  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084131  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084152  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084191  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084749  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.084601  566709 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1002 06:57:13.084771  566681 main.go:141] libmachine: (addons-535714) DBG | defining private network:
	I1002 06:57:13.084780  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084785  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084801  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.084820  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.084831  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.084840  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084851  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.084861  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084868  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084878  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084888  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.090767  566681 main.go:141] libmachine: (addons-535714) DBG | creating private network mk-addons-535714 192.168.39.0/24...
	I1002 06:57:13.158975  566681 main.go:141] libmachine: (addons-535714) DBG | private network mk-addons-535714 192.168.39.0/24 created
	I1002 06:57:13.159275  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.159307  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.159316  566681 main.go:141] libmachine: (addons-535714) setting up store path in /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.159335  566681 main.go:141] libmachine: (addons-535714) building disk image from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:13.159343  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>30f68bcb-0ec3-45ac-9012-251c5feb215b</uuid>
	I1002 06:57:13.159350  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 06:57:13.159356  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:03:a3:ce'/>
	I1002 06:57:13.159360  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.159383  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.159402  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.159413  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.159428  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.159461  566681 main.go:141] libmachine: (addons-535714) Downloading /home/jenkins/minikube-integration/21643-562157/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 06:57:13.159477  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.159489  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.159500  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.159522  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.159293  566709 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.427161  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.426986  566709 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa...
	I1002 06:57:13.691596  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691434  566709 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk...
	I1002 06:57:13.691620  566681 main.go:141] libmachine: (addons-535714) DBG | Writing magic tar header
	I1002 06:57:13.691651  566681 main.go:141] libmachine: (addons-535714) DBG | Writing SSH key tar header
	I1002 06:57:13.691660  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691559  566709 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.691671  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714
	I1002 06:57:13.691678  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines
	I1002 06:57:13.691687  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.691694  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157
	I1002 06:57:13.691702  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 06:57:13.691710  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins
	I1002 06:57:13.691724  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 (perms=drwx------)
	I1002 06:57:13.691738  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines (perms=drwxr-xr-x)
	I1002 06:57:13.691747  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home
	I1002 06:57:13.691758  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube (perms=drwxr-xr-x)
	I1002 06:57:13.691769  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157 (perms=drwxrwxr-x)
	I1002 06:57:13.691781  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 06:57:13.691789  566681 main.go:141] libmachine: (addons-535714) DBG | skipping /home - not owner
	I1002 06:57:13.691803  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 06:57:13.691811  566681 main.go:141] libmachine: (addons-535714) defining domain...
	I1002 06:57:13.693046  566681 main.go:141] libmachine: (addons-535714) defining domain using XML: 
	I1002 06:57:13.693074  566681 main.go:141] libmachine: (addons-535714) <domain type='kvm'>
	I1002 06:57:13.693080  566681 main.go:141] libmachine: (addons-535714)   <name>addons-535714</name>
	I1002 06:57:13.693085  566681 main.go:141] libmachine: (addons-535714)   <memory unit='MiB'>4096</memory>
	I1002 06:57:13.693090  566681 main.go:141] libmachine: (addons-535714)   <vcpu>2</vcpu>
	I1002 06:57:13.693093  566681 main.go:141] libmachine: (addons-535714)   <features>
	I1002 06:57:13.693098  566681 main.go:141] libmachine: (addons-535714)     <acpi/>
	I1002 06:57:13.693102  566681 main.go:141] libmachine: (addons-535714)     <apic/>
	I1002 06:57:13.693109  566681 main.go:141] libmachine: (addons-535714)     <pae/>
	I1002 06:57:13.693115  566681 main.go:141] libmachine: (addons-535714)   </features>
	I1002 06:57:13.693124  566681 main.go:141] libmachine: (addons-535714)   <cpu mode='host-passthrough'>
	I1002 06:57:13.693132  566681 main.go:141] libmachine: (addons-535714)   </cpu>
	I1002 06:57:13.693155  566681 main.go:141] libmachine: (addons-535714)   <os>
	I1002 06:57:13.693163  566681 main.go:141] libmachine: (addons-535714)     <type>hvm</type>
	I1002 06:57:13.693172  566681 main.go:141] libmachine: (addons-535714)     <boot dev='cdrom'/>
	I1002 06:57:13.693186  566681 main.go:141] libmachine: (addons-535714)     <boot dev='hd'/>
	I1002 06:57:13.693192  566681 main.go:141] libmachine: (addons-535714)     <bootmenu enable='no'/>
	I1002 06:57:13.693197  566681 main.go:141] libmachine: (addons-535714)   </os>
	I1002 06:57:13.693202  566681 main.go:141] libmachine: (addons-535714)   <devices>
	I1002 06:57:13.693207  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='cdrom'>
	I1002 06:57:13.693215  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.693220  566681 main.go:141] libmachine: (addons-535714)       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.693225  566681 main.go:141] libmachine: (addons-535714)       <readonly/>
	I1002 06:57:13.693231  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693240  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='disk'>
	I1002 06:57:13.693255  566681 main.go:141] libmachine: (addons-535714)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 06:57:13.693309  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.693334  566681 main.go:141] libmachine: (addons-535714)       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.693341  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693357  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693371  566681 main.go:141] libmachine: (addons-535714)       <source network='mk-addons-535714'/>
	I1002 06:57:13.693378  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693391  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693399  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693411  566681 main.go:141] libmachine: (addons-535714)       <source network='default'/>
	I1002 06:57:13.693416  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693435  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693445  566681 main.go:141] libmachine: (addons-535714)     <serial type='pty'>
	I1002 06:57:13.693480  566681 main.go:141] libmachine: (addons-535714)       <target port='0'/>
	I1002 06:57:13.693520  566681 main.go:141] libmachine: (addons-535714)     </serial>
	I1002 06:57:13.693540  566681 main.go:141] libmachine: (addons-535714)     <console type='pty'>
	I1002 06:57:13.693552  566681 main.go:141] libmachine: (addons-535714)       <target type='serial' port='0'/>
	I1002 06:57:13.693564  566681 main.go:141] libmachine: (addons-535714)     </console>
	I1002 06:57:13.693575  566681 main.go:141] libmachine: (addons-535714)     <rng model='virtio'>
	I1002 06:57:13.693588  566681 main.go:141] libmachine: (addons-535714)       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.693598  566681 main.go:141] libmachine: (addons-535714)     </rng>
	I1002 06:57:13.693609  566681 main.go:141] libmachine: (addons-535714)   </devices>
	I1002 06:57:13.693618  566681 main.go:141] libmachine: (addons-535714) </domain>
	I1002 06:57:13.693631  566681 main.go:141] libmachine: (addons-535714) 
	I1002 06:57:13.698471  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:ff:9b:2c in network default
	I1002 06:57:13.699181  566681 main.go:141] libmachine: (addons-535714) starting domain...
	I1002 06:57:13.699210  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:13.699219  566681 main.go:141] libmachine: (addons-535714) ensuring networks are active...
	I1002 06:57:13.699886  566681 main.go:141] libmachine: (addons-535714) Ensuring network default is active
	I1002 06:57:13.700240  566681 main.go:141] libmachine: (addons-535714) Ensuring network mk-addons-535714 is active
	I1002 06:57:13.700911  566681 main.go:141] libmachine: (addons-535714) getting domain XML...
	I1002 06:57:13.701998  566681 main.go:141] libmachine: (addons-535714) DBG | starting domain XML:
	I1002 06:57:13.702019  566681 main.go:141] libmachine: (addons-535714) DBG | <domain type='kvm'>
	I1002 06:57:13.702029  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>addons-535714</name>
	I1002 06:57:13.702036  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>26ed18e3-cae3-43e2-ba2a-85be4a0a7371</uuid>
	I1002 06:57:13.702049  566681 main.go:141] libmachine: (addons-535714) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 06:57:13.702060  566681 main.go:141] libmachine: (addons-535714) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 06:57:13.702069  566681 main.go:141] libmachine: (addons-535714) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 06:57:13.702075  566681 main.go:141] libmachine: (addons-535714) DBG |   <os>
	I1002 06:57:13.702085  566681 main.go:141] libmachine: (addons-535714) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 06:57:13.702093  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='cdrom'/>
	I1002 06:57:13.702101  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='hd'/>
	I1002 06:57:13.702116  566681 main.go:141] libmachine: (addons-535714) DBG |     <bootmenu enable='no'/>
	I1002 06:57:13.702127  566681 main.go:141] libmachine: (addons-535714) DBG |   </os>
	I1002 06:57:13.702134  566681 main.go:141] libmachine: (addons-535714) DBG |   <features>
	I1002 06:57:13.702180  566681 main.go:141] libmachine: (addons-535714) DBG |     <acpi/>
	I1002 06:57:13.702204  566681 main.go:141] libmachine: (addons-535714) DBG |     <apic/>
	I1002 06:57:13.702215  566681 main.go:141] libmachine: (addons-535714) DBG |     <pae/>
	I1002 06:57:13.702220  566681 main.go:141] libmachine: (addons-535714) DBG |   </features>
	I1002 06:57:13.702241  566681 main.go:141] libmachine: (addons-535714) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 06:57:13.702256  566681 main.go:141] libmachine: (addons-535714) DBG |   <clock offset='utc'/>
	I1002 06:57:13.702265  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 06:57:13.702283  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_reboot>restart</on_reboot>
	I1002 06:57:13.702295  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_crash>destroy</on_crash>
	I1002 06:57:13.702305  566681 main.go:141] libmachine: (addons-535714) DBG |   <devices>
	I1002 06:57:13.702317  566681 main.go:141] libmachine: (addons-535714) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 06:57:13.702328  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='cdrom'>
	I1002 06:57:13.702340  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw'/>
	I1002 06:57:13.702352  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.702364  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.702375  566681 main.go:141] libmachine: (addons-535714) DBG |       <readonly/>
	I1002 06:57:13.702387  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 06:57:13.702398  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702419  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='disk'>
	I1002 06:57:13.702432  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 06:57:13.702451  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.702462  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.702472  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 06:57:13.702482  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702490  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 06:57:13.702503  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 06:57:13.702512  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702521  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 06:57:13.702535  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 06:57:13.702589  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 06:57:13.702612  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702624  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702630  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:00:74:bc'/>
	I1002 06:57:13.702639  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='mk-addons-535714'/>
	I1002 06:57:13.702646  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702658  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 06:57:13.702665  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702675  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702687  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:ff:9b:2c'/>
	I1002 06:57:13.702697  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='default'/>
	I1002 06:57:13.702707  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702719  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 06:57:13.702730  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702740  566681 main.go:141] libmachine: (addons-535714) DBG |     <serial type='pty'>
	I1002 06:57:13.702751  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='isa-serial' port='0'>
	I1002 06:57:13.702765  566681 main.go:141] libmachine: (addons-535714) DBG |         <model name='isa-serial'/>
	I1002 06:57:13.702775  566681 main.go:141] libmachine: (addons-535714) DBG |       </target>
	I1002 06:57:13.702784  566681 main.go:141] libmachine: (addons-535714) DBG |     </serial>
	I1002 06:57:13.702806  566681 main.go:141] libmachine: (addons-535714) DBG |     <console type='pty'>
	I1002 06:57:13.702820  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='serial' port='0'/>
	I1002 06:57:13.702827  566681 main.go:141] libmachine: (addons-535714) DBG |     </console>
	I1002 06:57:13.702839  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='mouse' bus='ps2'/>
	I1002 06:57:13.702850  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 06:57:13.702861  566681 main.go:141] libmachine: (addons-535714) DBG |     <audio id='1' type='none'/>
	I1002 06:57:13.702881  566681 main.go:141] libmachine: (addons-535714) DBG |     <memballoon model='virtio'>
	I1002 06:57:13.702895  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 06:57:13.702901  566681 main.go:141] libmachine: (addons-535714) DBG |     </memballoon>
	I1002 06:57:13.702910  566681 main.go:141] libmachine: (addons-535714) DBG |     <rng model='virtio'>
	I1002 06:57:13.702918  566681 main.go:141] libmachine: (addons-535714) DBG |       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.702929  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 06:57:13.702944  566681 main.go:141] libmachine: (addons-535714) DBG |     </rng>
	I1002 06:57:13.702957  566681 main.go:141] libmachine: (addons-535714) DBG |   </devices>
	I1002 06:57:13.702972  566681 main.go:141] libmachine: (addons-535714) DBG | </domain>
	I1002 06:57:13.702987  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:14.963247  566681 main.go:141] libmachine: (addons-535714) waiting for domain to start...
	I1002 06:57:14.964664  566681 main.go:141] libmachine: (addons-535714) domain is now running
	I1002 06:57:14.964695  566681 main.go:141] libmachine: (addons-535714) waiting for IP...
	I1002 06:57:14.965420  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:14.966032  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:14.966060  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:14.966362  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:14.966431  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:14.966367  566709 retry.go:31] will retry after 210.201926ms: waiting for domain to come up
	I1002 06:57:15.178058  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.178797  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.178832  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.179051  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.179089  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.179030  566709 retry.go:31] will retry after 312.318729ms: waiting for domain to come up
	I1002 06:57:15.493036  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.493844  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.493865  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.494158  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.494260  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.494172  566709 retry.go:31] will retry after 379.144998ms: waiting for domain to come up
	I1002 06:57:15.874866  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.875597  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.875618  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.875940  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.875972  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.875891  566709 retry.go:31] will retry after 392.719807ms: waiting for domain to come up
	I1002 06:57:16.270678  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.271369  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.271417  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.271795  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.271822  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.271752  566709 retry.go:31] will retry after 502.852746ms: waiting for domain to come up
	I1002 06:57:16.776382  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.777033  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.777083  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.777418  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.777452  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.777390  566709 retry.go:31] will retry after 817.041708ms: waiting for domain to come up
	I1002 06:57:17.596403  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:17.597002  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:17.597037  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:17.597304  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:17.597337  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:17.597286  566709 retry.go:31] will retry after 1.129250566s: waiting for domain to come up
	I1002 06:57:18.728727  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:18.729410  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:18.729438  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:18.729739  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:18.729770  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:18.729716  566709 retry.go:31] will retry after 1.486801145s: waiting for domain to come up
	I1002 06:57:20.218801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:20.219514  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:20.219546  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:20.219811  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:20.219864  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:20.219802  566709 retry.go:31] will retry after 1.676409542s: waiting for domain to come up
	I1002 06:57:21.898812  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:21.899513  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:21.899536  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:21.899819  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:21.899877  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:21.899808  566709 retry.go:31] will retry after 1.43578276s: waiting for domain to come up
	I1002 06:57:23.337598  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:23.338214  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:23.338235  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:23.338569  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:23.338642  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:23.338553  566709 retry.go:31] will retry after 2.182622976s: waiting for domain to come up
	I1002 06:57:25.524305  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:25.524996  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:25.525030  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:25.525352  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:25.525383  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:25.525329  566709 retry.go:31] will retry after 2.567637867s: waiting for domain to come up
	I1002 06:57:28.094839  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:28.095351  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:28.095371  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:28.095666  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:28.095696  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:28.095635  566709 retry.go:31] will retry after 3.838879921s: waiting for domain to come up
	I1002 06:57:31.938799  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939560  566681 main.go:141] libmachine: (addons-535714) found domain IP: 192.168.39.164
	I1002 06:57:31.939593  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has current primary IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939601  566681 main.go:141] libmachine: (addons-535714) reserving static IP address...
	I1002 06:57:31.940101  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find host DHCP lease matching {name: "addons-535714", mac: "52:54:00:00:74:bc", ip: "192.168.39.164"} in network mk-addons-535714
	I1002 06:57:32.153010  566681 main.go:141] libmachine: (addons-535714) DBG | Getting to WaitForSSH function...
	I1002 06:57:32.153043  566681 main.go:141] libmachine: (addons-535714) reserved static IP address 192.168.39.164 for domain addons-535714
	I1002 06:57:32.153056  566681 main.go:141] libmachine: (addons-535714) waiting for SSH...
	I1002 06:57:32.156675  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157263  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.157288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157522  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH client type: external
	I1002 06:57:32.157548  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa (-rw-------)
	I1002 06:57:32.157582  566681 main.go:141] libmachine: (addons-535714) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 06:57:32.157609  566681 main.go:141] libmachine: (addons-535714) DBG | About to run SSH command:
	I1002 06:57:32.157620  566681 main.go:141] libmachine: (addons-535714) DBG | exit 0
	I1002 06:57:32.286418  566681 main.go:141] libmachine: (addons-535714) DBG | SSH cmd err, output: <nil>: 
	I1002 06:57:32.286733  566681 main.go:141] libmachine: (addons-535714) domain creation complete
	I1002 06:57:32.287044  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:32.287640  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288020  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288207  566681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 06:57:32.288223  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:32.289782  566681 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 06:57:32.289795  566681 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 06:57:32.289800  566681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 06:57:32.289805  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.292433  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.292851  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.292897  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.293050  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.293317  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293481  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293658  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.293813  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.294063  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.294076  566681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 06:57:32.392654  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.392681  566681 main.go:141] libmachine: Detecting the provisioner...
	I1002 06:57:32.392690  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.396029  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396454  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.396486  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396681  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.396903  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397079  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397260  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.397412  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.397680  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.397696  566681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 06:57:32.501992  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 06:57:32.502093  566681 main.go:141] libmachine: found compatible host: buildroot
	I1002 06:57:32.502117  566681 main.go:141] libmachine: Provisioning with buildroot...
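The provisioner is selected from the `cat /etc/os-release` output captured above (`ID=buildroot` → "found compatible host: buildroot"). A sketch of that key/value parse, under the assumption that simple `KEY=value` splitting with quote-trimming is sufficient (the real file format also allows shell escapes):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=value pairs from /etc/os-release-style text;
// the ID/NAME fields are what drive provisioner selection in the log.
func parseOSRelease(text string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`) // PRETTY_NAME is quoted in the sample output
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["PRETTY_NAME"]) // buildroot Buildroot 2025.02
}
```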
	I1002 06:57:32.502131  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502439  566681 buildroot.go:166] provisioning hostname "addons-535714"
	I1002 06:57:32.502476  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502701  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.506170  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506653  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.506716  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506786  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.507040  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507252  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507426  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.507729  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.507997  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.508013  566681 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535714 && echo "addons-535714" | sudo tee /etc/hostname
	I1002 06:57:32.632360  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535714
	
	I1002 06:57:32.632404  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.635804  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636293  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.636319  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636574  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.636804  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.636969  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.637110  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.637297  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.637584  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.637613  566681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:57:32.752063  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.752119  566681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 06:57:32.752193  566681 buildroot.go:174] setting up certificates
	I1002 06:57:32.752210  566681 provision.go:84] configureAuth start
	I1002 06:57:32.752256  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.752721  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:32.756026  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756514  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.756545  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756704  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.759506  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.759945  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.759972  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.760113  566681 provision.go:143] copyHostCerts
	I1002 06:57:32.760210  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 06:57:32.760331  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 06:57:32.760392  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 06:57:32.760440  566681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.addons-535714 san=[127.0.0.1 192.168.39.164 addons-535714 localhost minikube]
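The server cert generated above carries SANs for every way the machine may be addressed (`127.0.0.1`, the VM IP, the profile name, `localhost`, `minikube`), splitting IPs and hostnames into the appropriate SAN types. A sketch of that step, self-signed for brevity (the real flow signs with the minikube CA key from `certs/ca-key.pem`):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert builds a cert whose SANs cover the names/IPs from the log.
// Entries that parse as IPs become IP SANs; the rest become DNS SANs.
func newServerCert(sans []string) (*x509.Certificate, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-535714"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := newServerCert([]string{"127.0.0.1", "192.168.39.164", "addons-535714", "localhost", "minikube"})
	fmt.Println(err, cert.DNSNames, len(cert.IPAddresses))
}
```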
	I1002 06:57:32.997259  566681 provision.go:177] copyRemoteCerts
	I1002 06:57:32.997339  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:57:32.997365  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.001746  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002246  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.002275  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002606  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.002841  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.003067  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.003261  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.087811  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:57:33.120074  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:57:33.152344  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:57:33.183560  566681 provision.go:87] duration metric: took 431.305231ms to configureAuth
	I1002 06:57:33.183592  566681 buildroot.go:189] setting minikube options for container-runtime
	I1002 06:57:33.183785  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:33.183901  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.187438  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.187801  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.187825  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.188034  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.188285  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188508  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188682  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.188927  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.189221  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.189246  566681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:57:33.455871  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:57:33.455896  566681 main.go:141] libmachine: Checking connection to Docker...
	I1002 06:57:33.455904  566681 main.go:141] libmachine: (addons-535714) Calling .GetURL
	I1002 06:57:33.457296  566681 main.go:141] libmachine: (addons-535714) DBG | using libvirt version 8000000
	I1002 06:57:33.460125  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460550  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.460582  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460738  566681 main.go:141] libmachine: Docker is up and running!
	I1002 06:57:33.460770  566681 main.go:141] libmachine: Reticulating splines...
	I1002 06:57:33.460780  566681 client.go:171] duration metric: took 20.753318284s to LocalClient.Create
	I1002 06:57:33.460805  566681 start.go:167] duration metric: took 20.753406484s to libmachine.API.Create "addons-535714"
	I1002 06:57:33.460815  566681 start.go:293] postStartSetup for "addons-535714" (driver="kvm2")
	I1002 06:57:33.460824  566681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:57:33.460841  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.461104  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:57:33.461149  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.463666  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464001  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.464024  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.464486  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.464662  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.464805  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.547032  566681 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:57:33.552379  566681 info.go:137] Remote host: Buildroot 2025.02
	I1002 06:57:33.552408  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 06:57:33.552489  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 06:57:33.552524  566681 start.go:296] duration metric: took 91.702797ms for postStartSetup
	I1002 06:57:33.552573  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:33.553229  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.556294  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556659  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.556691  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556979  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:33.557200  566681 start.go:128] duration metric: took 20.867433906s to createHost
	I1002 06:57:33.557235  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.559569  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.559976  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.560033  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.560209  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.560387  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560524  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560647  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.560782  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.561006  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.561024  566681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 06:57:33.663941  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759388253.625480282
	
	I1002 06:57:33.663966  566681 fix.go:216] guest clock: 1759388253.625480282
	I1002 06:57:33.663974  566681 fix.go:229] Guest: 2025-10-02 06:57:33.625480282 +0000 UTC Remote: 2025-10-02 06:57:33.557215192 +0000 UTC m=+20.980868887 (delta=68.26509ms)
	I1002 06:57:33.664010  566681 fix.go:200] guest clock delta is within tolerance: 68.26509ms
	I1002 06:57:33.664022  566681 start.go:83] releasing machines lock for "addons-535714", held for 20.974372731s
	I1002 06:57:33.664050  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.664374  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.667827  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668310  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.668344  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668518  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669079  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669275  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669418  566681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:57:33.669466  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.669473  566681 ssh_runner.go:195] Run: cat /version.json
	I1002 06:57:33.669492  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.672964  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673168  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673457  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673495  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673642  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673878  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674001  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.674093  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674177  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674268  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.674352  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674502  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.752747  566681 ssh_runner.go:195] Run: systemctl --version
	I1002 06:57:33.777712  566681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:57:33.941402  566681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:57:33.949414  566681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:57:33.949490  566681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:57:33.971089  566681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:57:33.971121  566681 start.go:495] detecting cgroup driver to use...
	I1002 06:57:33.971215  566681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:57:33.990997  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:57:34.009642  566681 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:57:34.009719  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:57:34.028675  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:57:34.045011  566681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:57:34.191090  566681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:57:34.404836  566681 docker.go:234] disabling docker service ...
	I1002 06:57:34.404915  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:57:34.421846  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:57:34.437815  566681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:57:34.593256  566681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:57:34.739807  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:57:34.755656  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:57:34.780318  566681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:57:34.780381  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.794344  566681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:57:34.794437  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.807921  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.821174  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.834265  566681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:57:34.848039  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.861013  566681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.882928  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.895874  566681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:57:34.906834  566681 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:57:34.906902  566681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:57:34.930283  566681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:57:34.944196  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:35.086744  566681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:57:35.203118  566681 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:57:35.203247  566681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:57:35.208872  566681 start.go:563] Will wait 60s for crictl version
	I1002 06:57:35.208951  566681 ssh_runner.go:195] Run: which crictl
	I1002 06:57:35.213165  566681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 06:57:35.254690  566681 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 06:57:35.254809  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.285339  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.318360  566681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 06:57:35.319680  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:35.322840  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323187  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:35.323215  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323541  566681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 06:57:35.328294  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:35.344278  566681 kubeadm.go:883] updating cluster {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:57:35.344381  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:35.344426  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:35.382419  566681 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 06:57:35.382487  566681 ssh_runner.go:195] Run: which lz4
	I1002 06:57:35.386980  566681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 06:57:35.392427  566681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 06:57:35.392457  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 06:57:36.901929  566681 crio.go:462] duration metric: took 1.514994717s to copy over tarball
	I1002 06:57:36.902020  566681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 06:57:38.487982  566681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.585912508s)
	I1002 06:57:38.488018  566681 crio.go:469] duration metric: took 1.586055344s to extract the tarball
	I1002 06:57:38.488028  566681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 06:57:38.530041  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:38.574743  566681 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:57:38.574771  566681 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:57:38.574780  566681 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.34.1 crio true true} ...
	I1002 06:57:38.574907  566681 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:57:38.574982  566681 ssh_runner.go:195] Run: crio config
	I1002 06:57:38.626077  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:38.626100  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:38.626114  566681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:57:38.626157  566681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535714 NodeName:addons-535714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:57:38.626290  566681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-535714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:57:38.626379  566681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:57:38.638875  566681 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:57:38.638942  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:57:38.650923  566681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 06:57:38.672765  566681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:57:38.695198  566681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 06:57:38.716738  566681 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1002 06:57:38.721153  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:38.736469  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:38.882003  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:38.903662  566681 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714 for IP: 192.168.39.164
	I1002 06:57:38.903695  566681 certs.go:195] generating shared ca certs ...
	I1002 06:57:38.903722  566681 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.903919  566681 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 06:57:38.961629  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt ...
	I1002 06:57:38.961659  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt: {Name:mkce3dd067e2e7843e2a288d28dbaf57f057aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961829  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key ...
	I1002 06:57:38.961841  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key: {Name:mka327360c05168b3164194068242bb15d511ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961939  566681 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 06:57:39.050167  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt ...
	I1002 06:57:39.050199  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt: {Name:mkf18fa19ddf5ebcd4669a9a2e369e414c03725b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050375  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key ...
	I1002 06:57:39.050388  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key: {Name:mk774f61354e64c5344d2d0d059164fff9076c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050460  566681 certs.go:257] generating profile certs ...
	I1002 06:57:39.050516  566681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key
	I1002 06:57:39.050537  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt with IP's: []
	I1002 06:57:39.147298  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt ...
	I1002 06:57:39.147330  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: {Name:mk17b498d515b2f43666faa03b17d7223c9a8157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147495  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key ...
	I1002 06:57:39.147505  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key: {Name:mke1e8140b8916f87dd85d98abe8a51503f6e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147578  566681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed
	I1002 06:57:39.147597  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1002 06:57:39.310236  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed ...
	I1002 06:57:39.310266  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed: {Name:mk247c08955d8ed7427926c7244db21ffe837768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310428  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed ...
	I1002 06:57:39.310441  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed: {Name:mkc3fa16c2fd82a07eac700fa655e28a42c60f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310525  566681 certs.go:382] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt
	I1002 06:57:39.310624  566681 certs.go:386] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key
	I1002 06:57:39.310682  566681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key
	I1002 06:57:39.310701  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt with IP's: []
	I1002 06:57:39.497350  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt ...
	I1002 06:57:39.497386  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt: {Name:mk4f28529f4cee1ff8311028b7bb7fc35a77bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497555  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key ...
	I1002 06:57:39.497569  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key: {Name:mkfac0b0a329edb8634114371202cb4ba011c129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497750  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:57:39.497784  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:57:39.497808  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:57:39.497835  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 06:57:39.498475  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:57:39.530649  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:57:39.561340  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:57:39.593844  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:57:39.629628  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:57:39.668367  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:57:39.699924  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:57:39.730177  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:57:39.761107  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:57:39.791592  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:57:39.813294  566681 ssh_runner.go:195] Run: openssl version
	I1002 06:57:39.820587  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:57:39.834664  566681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840283  566681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840348  566681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.848412  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:57:39.863027  566681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:57:39.868269  566681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:57:39.868325  566681 kubeadm.go:400] StartCluster: {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:39.868408  566681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:57:39.868500  566681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:57:39.910571  566681 cri.go:89] found id: ""
	I1002 06:57:39.910645  566681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:57:39.923825  566681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:57:39.936522  566681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:57:39.949191  566681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:57:39.949214  566681 kubeadm.go:157] found existing configuration files:
	
	I1002 06:57:39.949292  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:57:39.961561  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:57:39.961637  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:57:39.974337  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:57:39.986029  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:57:39.986104  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:57:39.997992  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.008894  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:57:40.008966  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.021235  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:57:40.032694  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:57:40.032754  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:57:40.045554  566681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 06:57:40.211362  566681 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:57:51.799597  566681 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:57:51.799689  566681 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:57:51.799798  566681 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:57:51.799950  566681 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:57:51.800082  566681 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:57:51.800206  566681 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:57:51.802349  566681 out.go:252]   - Generating certificates and keys ...
	I1002 06:57:51.802439  566681 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:57:51.802492  566681 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:57:51.802586  566681 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:57:51.802729  566681 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:57:51.802823  566681 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:57:51.802894  566681 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:57:51.802944  566681 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:57:51.803058  566681 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803125  566681 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:57:51.803276  566681 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803350  566681 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:57:51.803420  566681 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:57:51.803491  566681 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:57:51.803557  566681 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:57:51.803634  566681 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:57:51.803717  566681 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:57:51.803807  566681 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:57:51.803899  566681 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:57:51.803950  566681 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:57:51.804029  566681 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:57:51.804088  566681 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:57:51.805702  566681 out.go:252]   - Booting up control plane ...
	I1002 06:57:51.805781  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:57:51.805846  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:57:51.805929  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:57:51.806028  566681 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:57:51.806148  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:57:51.806260  566681 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:57:51.806361  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:57:51.806420  566681 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:57:51.806575  566681 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:57:51.806669  566681 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:57:51.806717  566681 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.672587ms
	I1002 06:57:51.806806  566681 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:57:51.806892  566681 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1002 06:57:51.806963  566681 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:57:51.807067  566681 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:57:51.807185  566681 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.362189492s
	I1002 06:57:51.807284  566681 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.802664802s
	I1002 06:57:51.807338  566681 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003805488s
	I1002 06:57:51.807453  566681 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:57:51.807587  566681 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:57:51.807642  566681 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:57:51.807816  566681 kubeadm.go:318] [mark-control-plane] Marking the node addons-535714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:57:51.807890  566681 kubeadm.go:318] [bootstrap-token] Using token: 7tuk3k.1448ee54qv9op8vd
	I1002 06:57:51.810266  566681 out.go:252]   - Configuring RBAC rules ...
	I1002 06:57:51.810355  566681 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:57:51.810443  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:57:51.810582  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:57:51.810746  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:57:51.810922  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:57:51.811039  566681 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:57:51.811131  566681 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:57:51.811203  566681 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:57:51.811259  566681 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:57:51.811271  566681 kubeadm.go:318] 
	I1002 06:57:51.811321  566681 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:57:51.811327  566681 kubeadm.go:318] 
	I1002 06:57:51.811408  566681 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:57:51.811416  566681 kubeadm.go:318] 
	I1002 06:57:51.811438  566681 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:57:51.811524  566681 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:57:51.811568  566681 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:57:51.811574  566681 kubeadm.go:318] 
	I1002 06:57:51.811638  566681 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:57:51.811650  566681 kubeadm.go:318] 
	I1002 06:57:51.811704  566681 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:57:51.811711  566681 kubeadm.go:318] 
	I1002 06:57:51.811751  566681 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:57:51.811811  566681 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:57:51.811912  566681 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:57:51.811926  566681 kubeadm.go:318] 
	I1002 06:57:51.812042  566681 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:57:51.812153  566681 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:57:51.812165  566681 kubeadm.go:318] 
	I1002 06:57:51.812280  566681 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812417  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 \
	I1002 06:57:51.812453  566681 kubeadm.go:318] 	--control-plane 
	I1002 06:57:51.812464  566681 kubeadm.go:318] 
	I1002 06:57:51.812595  566681 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:57:51.812615  566681 kubeadm.go:318] 
	I1002 06:57:51.812711  566681 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812863  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 
	I1002 06:57:51.812931  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:51.812944  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:51.815686  566681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 06:57:51.817060  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 06:57:51.834402  566681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 06:57:51.858951  566681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:57:51.859117  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:51.859124  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535714 minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-535714 minikube.k8s.io/primary=true
	I1002 06:57:51.921378  566681 ops.go:34] apiserver oom_adj: -16
	I1002 06:57:52.030323  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:52.531214  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.031113  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.531050  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.030867  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.531128  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.030521  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.530702  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.030762  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.196068  566681 kubeadm.go:1113] duration metric: took 4.337043927s to wait for elevateKubeSystemPrivileges
	I1002 06:57:56.196100  566681 kubeadm.go:402] duration metric: took 16.3277794s to StartCluster
	I1002 06:57:56.196121  566681 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.196294  566681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:56.196768  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.197012  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:57:56.197039  566681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:56.197157  566681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:57:56.197305  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.197326  566681 addons.go:69] Setting ingress=true in profile "addons-535714"
	I1002 06:57:56.197323  566681 addons.go:69] Setting default-storageclass=true in profile "addons-535714"
	I1002 06:57:56.197353  566681 addons.go:238] Setting addon ingress=true in "addons-535714"
	I1002 06:57:56.197360  566681 addons.go:69] Setting registry=true in profile "addons-535714"
	I1002 06:57:56.197367  566681 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535714"
	I1002 06:57:56.197376  566681 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535714"
	I1002 06:57:56.197382  566681 addons.go:69] Setting volumesnapshots=true in profile "addons-535714"
	I1002 06:57:56.197391  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535714"
	I1002 06:57:56.197393  566681 addons.go:69] Setting ingress-dns=true in profile "addons-535714"
	I1002 06:57:56.197397  566681 addons.go:238] Setting addon volumesnapshots=true in "addons-535714"
	I1002 06:57:56.197403  566681 addons.go:238] Setting addon ingress-dns=true in "addons-535714"
	I1002 06:57:56.197413  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197417  566681 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:57:56.197432  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197438  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197454  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197317  566681 addons.go:69] Setting gcp-auth=true in profile "addons-535714"
	I1002 06:57:56.197804  566681 addons.go:69] Setting metrics-server=true in profile "addons-535714"
	I1002 06:57:56.197813  566681 mustload.go:65] Loading cluster: addons-535714
	I1002 06:57:56.197822  566681 addons.go:238] Setting addon metrics-server=true in "addons-535714"
	I1002 06:57:56.197849  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197953  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197985  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197348  566681 addons.go:69] Setting cloud-spanner=true in profile "addons-535714"
	I1002 06:57:56.197995  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198002  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.198025  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198027  566681 addons.go:69] Setting inspektor-gadget=true in profile "addons-535714"
	I1002 06:57:56.198034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198040  566681 addons.go:238] Setting addon inspektor-gadget=true in "addons-535714"
	I1002 06:57:56.198051  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198062  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198075  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198080  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198105  566681 addons.go:69] Setting volcano=true in profile "addons-535714"
	I1002 06:57:56.198115  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198118  566681 addons.go:238] Setting addon volcano=true in "addons-535714"
	I1002 06:57:56.198121  566681 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198148  566681 addons.go:69] Setting registry-creds=true in profile "addons-535714"
	I1002 06:57:56.198149  566681 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-535714"
	I1002 06:57:56.198007  566681 addons.go:238] Setting addon cloud-spanner=true in "addons-535714"
	I1002 06:57:56.197369  566681 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198159  566681 addons.go:238] Setting addon registry-creds=true in "addons-535714"
	I1002 06:57:56.197383  566681 addons.go:238] Setting addon registry=true in "addons-535714"
	I1002 06:57:56.198168  566681 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-535714"
	I1002 06:57:56.197305  566681 addons.go:69] Setting yakd=true in profile "addons-535714"
	I1002 06:57:56.198174  566681 addons.go:69] Setting storage-provisioner=true in profile "addons-535714"
	I1002 06:57:56.198182  566681 addons.go:238] Setting addon yakd=true in "addons-535714"
	I1002 06:57:56.198188  566681 addons.go:238] Setting addon storage-provisioner=true in "addons-535714"
	I1002 06:57:56.197356  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535714"
	I1002 06:57:56.197990  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198337  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198362  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198371  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198392  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198402  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198453  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198563  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198685  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198716  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198796  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198823  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198872  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198882  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198903  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.199225  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199278  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199496  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199602  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199605  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.199635  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.200717  566681 out.go:179] * Verifying Kubernetes components...
	I1002 06:57:56.203661  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:56.205590  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205627  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.205734  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205767  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.207434  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.207479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.210405  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.210443  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.213438  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.213479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.214017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.214056  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.232071  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1002 06:57:56.233110  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.234209  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.234234  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.234937  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.236013  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.236165  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.237450  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I1002 06:57:56.239323  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I1002 06:57:56.239414  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1002 06:57:56.240034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240196  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240748  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.240776  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.240868  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1002 06:57:56.240881  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.241379  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.241396  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.241535  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.242519  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.242540  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.242696  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.242735  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.242850  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.243325  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1002 06:57:56.243893  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.243945  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.244617  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.244654  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.245057  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.245890  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.245907  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.246010  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 06:57:56.246033  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1002 06:57:56.246568  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.247024  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.247099  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.247133  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.247421  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1002 06:57:56.247710  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.247729  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.248188  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.248445  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.249846  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.250467  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.251029  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251054  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.251579  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251601  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.252078  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.252654  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.252734  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.255593  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.255986  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.256022  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.257178  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.257900  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.257951  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.258275  566681 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-535714"
	I1002 06:57:56.259770  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.259874  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.260317  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.260360  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.260738  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.260770  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.261307  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.261989  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.262034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.263359  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I1002 06:57:56.263562  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I1002 06:57:56.264010  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.264539  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.264559  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.265015  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.265220  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.268199  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I1002 06:57:56.268835  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.269385  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.269407  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.269800  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.272103  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.272173  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.272820  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.274630  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1002 06:57:56.275810  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I1002 06:57:56.275999  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1002 06:57:56.276099  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I1002 06:57:56.276317  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1002 06:57:56.276957  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277804  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.277826  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.277935  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277992  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.279294  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.279318  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.279418  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279522  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1002 06:57:56.279526  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.280424  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280801  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280956  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:57:56.280961  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281067  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281080  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281248  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281259  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281396  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.280977  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281804  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.281870  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.282274  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282869  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.282901  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282927  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.282975  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.283442  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.284009  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.284202  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.284751  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:56.284768  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:57:56.284787  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.284857  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.284890  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.285017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.285054  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.288207  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.289274  566681 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:57:56.289290  566681 addons.go:238] Setting addon default-storageclass=true in "addons-535714"
	I1002 06:57:56.289364  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.289753  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.289797  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290042  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1002 06:57:56.290151  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.290556  566681 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:56.290578  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.290579  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:57:56.290609  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290771  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290990  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.291089  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1002 06:57:56.291362  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291376  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.291505  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291516  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.292055  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.293244  566681 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:57:56.294939  566681 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:56.294996  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:57:56.295277  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.296317  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.296363  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.296433  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I1002 06:57:56.297190  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.297368  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.300772  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.300866  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.300946  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.300966  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.300983  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301003  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301026  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.301076  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1002 06:57:56.301165  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.301203  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.301228  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1002 06:57:56.301400  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.301411  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.301454  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.301467  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.303443  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.303720  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.303466  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.303491  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.303762  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.303770  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.303776  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303526  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.303632  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.304435  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.304932  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.305291  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.305345  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.305464  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.305492  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1002 06:57:56.305495  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.305508  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.305577  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.305592  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	W1002 06:57:56.305630  566681 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:57:56.306621  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307189  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.307311  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.307383  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.307409  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307505  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.307540  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.307955  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.307981  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.308071  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.308163  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.308587  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.309033  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309057  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309132  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309293  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.309302  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309314  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309372  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.309533  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309698  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309703  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.309839  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310208  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310523  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.311044  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.311749  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.313557  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.316426  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1002 06:57:56.319293  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1002 06:57:56.319454  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.319564  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1002 06:57:56.319675  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1002 06:57:56.319683  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.319813  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.320386  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.320405  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.320695  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.320492  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.321204  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.321258  566681 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:57:56.321684  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.321443  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I1002 06:57:56.321593  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.321816  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.322144  566681 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:57:56.322156  566681 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:57:56.323037  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.323050  566681 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:56.323066  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:57:56.323087  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.323146  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.323323  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.323337  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.324564  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:56.324583  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:57:56.324603  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.324892  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325026  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.325041  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.325304  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1002 06:57:56.325602  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325730  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.325892  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326132  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326261  566681 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:57:56.327284  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.327472  566681 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:57:56.327597  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:57:56.327623  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.328569  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.328642  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.328661  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329119  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.329383  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.329634  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.329665  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329932  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.330003  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.331010  566681 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:57:56.331650  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332245  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:57:56.332277  566681 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:57:56.332261  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332297  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.332372  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.333369  566681 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:57:56.333621  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.333646  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.333810  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.334276  566681 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:57:56.334843  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.335194  566681 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:56.335210  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:57:56.335228  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.335446  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.335655  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I1002 06:57:56.335851  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.336132  566681 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:57:56.336170  566681 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:57:56.336280  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.336440  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 06:57:56.336618  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.337098  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.338250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.338315  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.338584  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.338676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.338709  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.338721  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.339313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.339382  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.339452  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.339507  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.340336  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.340677  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.340657  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341043  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.341796  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.341865  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342040  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342263  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342431  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342440  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342454  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342502  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342595  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342614  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342621  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342695  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.342072  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343379  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343750  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.343817  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.343832  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.344562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.344702  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344753  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.344946  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.345322  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.345404  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345404  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.345548  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.345606  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345806  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.346007  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.346320  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.346590  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.346862  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1002 06:57:56.347602  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.347914  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.348757  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.348800  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.349261  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.349633  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.349706  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.350337  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:57:56.351587  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:57:56.351643  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.351655  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:57:56.352903  566681 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:57:56.352987  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:57:56.353046  566681 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:57:56.353092  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.352987  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:57:56.353974  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1002 06:57:56.354300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1002 06:57:56.354530  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:57:56.354545  566681 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:57:56.354562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.354607  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355031  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355314  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355362  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355747  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.355869  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:57:56.355907  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355921  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355982  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.356446  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.356686  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.358485  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:57:56.359466  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.359801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360238  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360272  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.360643  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360654  566681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:56.360667  566681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:57:56.360676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360684  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.360847  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.360902  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360949  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.361063  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361261  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.361264  566681 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:57:56.361278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.361264  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:57:56.361448  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.361531  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361713  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.362047  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.363668  566681 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:57:56.363670  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:57:56.364768  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365172  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:56.365189  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:57:56.365208  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.365463  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.365492  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365867  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.366200  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.366332  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:57:56.366394  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.366567  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.367647  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:57:56.367669  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:57:56.367689  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.369424  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370073  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.370181  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370353  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.370354  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1002 06:57:56.370539  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.370710  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.370855  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.371120  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.371862  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.371993  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.372440  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.372590  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.372646  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.373687  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.373711  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.373884  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.374060  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.374270  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.374438  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.374887  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.376513  566681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:57:56.377878  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:56.377895  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:57:56.377926  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.381301  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.381862  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.381898  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.382058  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.382245  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.382379  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.382525  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	W1002 06:57:56.611250  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611293  566681 retry.go:31] will retry after 268.923212ms: ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	W1002 06:57:56.611372  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611378  566681 retry.go:31] will retry after 284.79555ms: ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:57.238066  566681 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:57:57.238093  566681 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:57:57.274258  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:57.291447  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:57.296644  566681 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:57.296665  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:57:57.317724  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:57.326760  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:57:57.326790  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:57:57.344388  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:57.359635  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:57:57.359666  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:57:57.391219  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:57.397913  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:57.466213  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:57.539770  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:57:57.539800  566681 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:57:57.565073  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:57:57.565109  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:57:57.626622  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.42956155s)
	I1002 06:57:57.626664  566681 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.422968545s)
	I1002 06:57:57.626751  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:57.626829  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:57:57.788309  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:57:57.788340  566681 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:57:57.863163  566681 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:57.863190  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:57:57.896903  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:57:57.896955  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:57:57.923302  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:58.011690  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:58.012981  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:58.110306  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:57:58.110346  566681 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:57:58.142428  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:57:58.142456  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:57:58.216082  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:57:58.216112  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:57:58.218768  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:58.222643  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.222669  566681 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:57:58.429860  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:57:58.429897  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:57:58.485954  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:57:58.485995  566681 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:57:58.501916  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:57:58.501955  566681 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:57:58.521314  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.818318  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:57:58.818357  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:57:58.833980  566681 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:58.834010  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:57:58.873392  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:58.873431  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:57:59.176797  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:59.186761  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:57:59.186798  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:57:59.305759  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:59.719259  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:57:59.719285  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:58:00.188246  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:58:00.188281  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:58:00.481133  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.20682266s)
	I1002 06:58:00.481238  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.481605  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.481667  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.481693  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.481705  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481717  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.482053  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.482070  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.482081  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.644178  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:58:00.644209  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:58:01.086809  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:58:01.086834  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:58:01.452986  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:01.453026  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:58:02.150700  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:02.601667  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.310178549s)
	I1002 06:58:02.601725  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.28395893s)
	I1002 06:58:02.601734  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601747  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601765  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601795  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601869  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.25743101s)
	I1002 06:58:02.601905  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601924  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601917  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.210665802s)
	I1002 06:58:02.601951  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601961  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602046  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602055  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602062  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602178  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602365  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602381  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602379  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602385  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602399  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602401  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602410  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602351  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602416  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602424  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602390  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602460  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602330  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602552  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602560  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602566  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602767  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602847  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.603001  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603018  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602869  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602869  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603276  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:03.763895  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:58:03.763944  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:03.767733  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768302  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:03.768333  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768654  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:03.768868  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:03.769064  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:03.769213  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:04.277228  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:58:04.505226  566681 addons.go:238] Setting addon gcp-auth=true in "addons-535714"
	I1002 06:58:04.505305  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:58:04.505781  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.505848  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.521300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1002 06:58:04.521841  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.522464  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.522494  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.522889  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.523576  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.523636  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.537716  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1002 06:58:04.538258  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.538728  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.538756  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.539153  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.539385  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:58:04.541614  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:58:04.541849  566681 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:58:04.541880  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:04.545872  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546401  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:04.546429  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546708  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:04.546895  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:04.547027  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:04.547194  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:05.770941  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.372950609s)
	I1002 06:58:05.771023  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771039  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771065  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.304816797s)
	I1002 06:58:05.771113  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771131  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771178  566681 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.1443973s)
	I1002 06:58:05.771222  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.144363906s)
	I1002 06:58:05.771258  566681 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 06:58:05.771308  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.847977896s)
	W1002 06:58:05.771333  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771355  566681 retry.go:31] will retry after 297.892327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771456  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758443398s)
	I1002 06:58:05.771481  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771490  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771540  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.759815099s)
	I1002 06:58:05.771573  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771575  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.552784974s)
	I1002 06:58:05.771584  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771595  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771611  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771719  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.250362363s)
	I1002 06:58:05.771747  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771759  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771942  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.771963  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772013  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772022  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772032  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772040  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772044  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772052  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772059  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772194  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772224  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772230  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772248  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772485  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772523  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772532  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772541  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772549  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772589  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772628  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772636  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772645  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772653  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772709  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772796  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773193  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.773210  566681 addons.go:479] Verifying addon registry=true in "addons-535714"
	I1002 06:58:05.773744  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.773810  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773834  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774038  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.774118  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774129  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772818  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772841  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774925  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774937  566681 addons.go:479] Verifying addon ingress=true in "addons-535714"
	I1002 06:58:05.772862  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775004  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775017  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.775024  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772880  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775347  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775380  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775386  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775394  566681 addons.go:479] Verifying addon metrics-server=true in "addons-535714"
	I1002 06:58:05.776348  566681 node_ready.go:35] waiting up to 6m0s for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.776980  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.776996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.776998  566681 out.go:179] * Verifying registry addon...
	I1002 06:58:05.779968  566681 out.go:179] * Verifying ingress addon...
	I1002 06:58:05.780767  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:58:05.782010  566681 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:58:05.829095  566681 node_ready.go:49] node "addons-535714" is "Ready"
	I1002 06:58:05.829146  566681 node_ready.go:38] duration metric: took 52.75602ms for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.829168  566681 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:58:05.829233  566681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:58:05.834443  566681 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:58:05.834466  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:05.835080  566681 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:58:05.835100  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:05.875341  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.875368  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.875751  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.875763  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.875778  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:58:05.875878  566681 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 06:58:05.909868  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.909898  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.910207  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.910270  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.910287  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.069811  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:06.216033  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039174172s)
	W1002 06:58:06.216104  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216108  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.910297192s)
	I1002 06:58:06.216150  566681 retry.go:31] will retry after 161.340324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216192  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216210  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216504  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216542  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216549  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.216557  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216563  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216800  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216843  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.218514  566681 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535714 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:58:06.294875  566681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535714" context rescaled to 1 replicas
	I1002 06:58:06.324438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:06.327459  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.377937  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:58:06.794270  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.798170  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.296006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.297921  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.825812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.825866  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.904551  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.753782282s)
	I1002 06:58:07.904616  566681 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362740219s)
	I1002 06:58:07.904661  566681 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.075410022s)
	I1002 06:58:07.904685  566681 api_server.go:72] duration metric: took 11.707614799s to wait for apiserver process to appear ...
	I1002 06:58:07.904692  566681 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:58:07.904618  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.904714  566681 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1002 06:58:07.904746  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905650  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.905668  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.905673  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:07.905682  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.905697  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905988  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.906010  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.906023  566681 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:58:07.917720  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:58:07.917721  566681 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:58:07.919394  566681 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:58:07.920319  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:58:07.920611  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:58:07.920631  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:58:07.923712  566681 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1002 06:58:07.935689  566681 api_server.go:141] control plane version: v1.34.1
	I1002 06:58:07.935726  566681 api_server.go:131] duration metric: took 31.026039ms to wait for apiserver health ...
	I1002 06:58:07.935739  566681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:58:07.938642  566681 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:58:07.938662  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:07.962863  566681 system_pods.go:59] 20 kube-system pods found
	I1002 06:58:07.962924  566681 system_pods.go:61] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:07.962934  566681 system_pods.go:61] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962944  566681 system_pods.go:61] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962951  566681 system_pods.go:61] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:07.962955  566681 system_pods.go:61] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:07.962959  566681 system_pods.go:61] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending
	I1002 06:58:07.962962  566681 system_pods.go:61] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:07.962965  566681 system_pods.go:61] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:07.962968  566681 system_pods.go:61] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:07.962973  566681 system_pods.go:61] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:07.962979  566681 system_pods.go:61] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:07.962983  566681 system_pods.go:61] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:07.962988  566681 system_pods.go:61] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:07.962994  566681 system_pods.go:61] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:07.962999  566681 system_pods.go:61] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:07.963005  566681 system_pods.go:61] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:07.963017  566681 system_pods.go:61] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:07.963022  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963031  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963036  566681 system_pods.go:61] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:07.963048  566681 system_pods.go:74] duration metric: took 27.298515ms to wait for pod list to return data ...
	I1002 06:58:07.963061  566681 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:58:07.979696  566681 default_sa.go:45] found service account: "default"
	I1002 06:58:07.979723  566681 default_sa.go:55] duration metric: took 16.655591ms for default service account to be created ...
	I1002 06:58:07.979733  566681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:58:08.050371  566681 system_pods.go:86] 20 kube-system pods found
	I1002 06:58:08.050407  566681 system_pods.go:89] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:08.050415  566681 system_pods.go:89] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050424  566681 system_pods.go:89] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050430  566681 system_pods.go:89] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:08.050438  566681 system_pods.go:89] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:08.050443  566681 system_pods.go:89] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:58:08.050449  566681 system_pods.go:89] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:08.050456  566681 system_pods.go:89] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:08.050463  566681 system_pods.go:89] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:08.050472  566681 system_pods.go:89] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:08.050477  566681 system_pods.go:89] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:08.050485  566681 system_pods.go:89] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:08.050493  566681 system_pods.go:89] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:08.050504  566681 system_pods.go:89] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:08.050512  566681 system_pods.go:89] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:08.050523  566681 system_pods.go:89] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:08.050528  566681 system_pods.go:89] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:08.050537  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050542  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050551  566681 system_pods.go:89] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:08.050567  566681 system_pods.go:126] duration metric: took 70.827007ms to wait for k8s-apps to be running ...
	I1002 06:58:08.050583  566681 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:58:08.050638  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:08.169874  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:58:08.169907  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:58:08.289577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.292025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.296361  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.296391  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:58:08.432642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:08.459596  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.795545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.796983  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.947651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.295174  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.296291  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.426575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.794891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.794937  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.929559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.288382  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.293181  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.428326  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.511821  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.441960114s)
	W1002 06:58:10.511871  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511903  566681 retry.go:31] will retry after 394.105371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511999  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.133998235s)
	I1002 06:58:10.512065  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512084  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512009  566681 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.461351775s)
	I1002 06:58:10.512151  566681 system_svc.go:56] duration metric: took 2.461548607s WaitForService to wait for kubelet
	I1002 06:58:10.512170  566681 kubeadm.go:586] duration metric: took 14.315097833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:58:10.512195  566681 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:58:10.512421  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512436  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.512445  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512451  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512808  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512831  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.525421  566681 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 06:58:10.525467  566681 node_conditions.go:123] node cpu capacity is 2
	I1002 06:58:10.525483  566681 node_conditions.go:105] duration metric: took 13.282233ms to run NodePressure ...
	I1002 06:58:10.525500  566681 start.go:241] waiting for startup goroutines ...
	I1002 06:58:10.876948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.878962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.907099  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:10.933831  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.474178987s)
	I1002 06:58:10.933902  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.933917  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934327  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934351  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.934363  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.934372  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934718  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934741  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.936073  566681 addons.go:479] Verifying addon gcp-auth=true in "addons-535714"
	I1002 06:58:10.939294  566681 out.go:179] * Verifying gcp-auth addon...
	I1002 06:58:10.941498  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:58:10.967193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.967643  566681 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:58:10.967661  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.291995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.292859  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.426822  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.449596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.787220  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.790007  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.927177  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.946352  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.291330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.291893  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.412988  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.505843996s)
	W1002 06:58:12.413060  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.413088  566681 retry.go:31] will retry after 830.72209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.425033  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.449434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.790923  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.792837  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.929132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.949344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.244514  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:13.289311  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.291334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.429008  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.453075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.786448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.787372  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.926128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.944808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.290787  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.291973  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.426597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.446124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.495404  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250841467s)
	W1002 06:58:14.495476  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.495515  566681 retry.go:31] will retry after 993.52867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.787133  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.787363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.925480  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.947120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.288745  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.290247  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.426491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.446707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.489998  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:15.790203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.790718  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.926338  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.947762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.288050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.294216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.426315  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.448623  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.749674  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.259622296s)
	W1002 06:58:16.749739  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.749766  566681 retry.go:31] will retry after 685.893269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.784937  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.789418  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.924303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.945254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.286582  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.289258  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.429493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.436551  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:17.446130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.789304  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.789354  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.927192  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.947272  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.287684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.287964  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.425334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.446542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.793984  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357370737s)
	W1002 06:58:18.794035  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.794058  566681 retry.go:31] will retry after 1.769505645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.818834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.819319  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.926250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.946166  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.286120  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.287299  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.427368  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.446296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.788860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.790575  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.926266  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.946838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.285631  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.286287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.426458  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.448700  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.563743  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:20.784983  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.792452  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.928439  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.946213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.354534  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.355101  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.424438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.447780  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.787792  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.788239  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.926313  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.946909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.986148  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422343909s)
	W1002 06:58:21.986215  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:21.986241  566681 retry.go:31] will retry after 1.591159568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:22.479105  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.490010  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.490062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:22.490154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.785438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.785505  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.924097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.945260  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.287691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.288324  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.424675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.444770  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.578011  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:23.942123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.948294  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.948453  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.950791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.287641  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.287755  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.427062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.445753  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.646106  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.068053257s)
	W1002 06:58:24.646165  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.646192  566681 retry.go:31] will retry after 2.605552754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.785021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.786706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.924880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.945307  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.293097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.295253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.426401  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.448785  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.786965  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.789832  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.926383  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.947419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.286815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.424942  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.444763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.788540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.788706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.945896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.252378  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:27.285347  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.286330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.426765  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.444675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.783930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.785939  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.925152  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.946794  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:27.992201  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:27.992240  566681 retry.go:31] will retry after 8.383284602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:28.292474  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.293236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.449878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:28.785825  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.786277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.930557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.944934  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.288741  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.289425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.425596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.448825  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.791293  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.791772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.925493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.947040  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.289093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.289274  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.429043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.445086  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.787343  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.788106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.925916  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.945578  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.287772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.288130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.424173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.444911  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.839251  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.839613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.924537  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.945244  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.285593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.287197  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.428173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.445646  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.790722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.792545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.924044  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.948465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.287477  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.287815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.426173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.445002  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.789091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.789248  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.926672  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.945340  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.287879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.291550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.424476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.446160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.790769  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.793072  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.924896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.945667  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.523723  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.524500  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.524737  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.525162  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.790230  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.791831  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.924241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.944951  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.289627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.289977  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.375684  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:36.425592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.451074  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.785903  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.787679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.925288  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.947999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.311635  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.311959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.426029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.446091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.636801  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261070571s)
	W1002 06:58:37.636852  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.636877  566681 retry.go:31] will retry after 12.088306464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.784365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.786077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.924729  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.287422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.288052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.424776  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.446043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.787364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.788336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.929977  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.952669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.285777  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.286130  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.425664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.445359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.791043  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.792332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.927261  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.949133  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.297847  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.298155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.508411  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.508530  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.790869  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.791640  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.926541  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.946409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.284335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.288282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.425342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.445476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.786456  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.787369  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.925788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.285122  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.289954  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.427812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.448669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.789086  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.793784  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.981476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.983793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.287301  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.287653  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.425089  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.446115  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.788762  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.788804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.925841  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.946154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.291446  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.291561  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.424642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.445497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.784807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.785666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.924223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.945793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.287330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.288804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.425720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.445387  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.784761  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.787219  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.925198  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.945101  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.287324  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.287453  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.425817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.785000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.786016  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.924786  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.944720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.284615  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.286350  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.424772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.444696  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.784801  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.786247  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.924675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.945863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.285254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.286071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.424850  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.444546  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.784736  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.787062  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.924609  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.285611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.286827  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.424821  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.444716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.726164  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:49.787775  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.787812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.924332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.945915  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:50.285693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.287323  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.425093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:50.457717  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.457755  566681 retry.go:31] will retry after 14.401076568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
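The retried `kubectl apply` above keeps failing for the same reason: validation requires every manifest to carry the top-level `apiVersion` and `kind` fields, and the copy of `ig-crd.yaml` on the node has neither. A minimal sketch of that check, assuming parsed manifests as plain dicts (the manifest contents below are hypothetical stand-ins, not the actual file):

```python
# Sketch of the top-level check behind kubectl's
# "[apiVersion not set, kind not set]" validation error.

def missing_required_fields(manifest: dict) -> list:
    """Return which of the required top-level fields are absent or empty."""
    return [f for f in ("apiVersion", "kind") if not manifest.get(f)]

# Hypothetical stand-in for the broken ig-crd.yaml: both fields missing.
bad = {"metadata": {"name": "example-crd"}}

# The same manifest with a well-formed header for comparison.
good = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "example-crd"},
}

print(missing_required_fields(bad))   # ['apiVersion', 'kind']
print(missing_required_fields(good))  # []
```

As the error text notes, `--validate=false` would suppress the check, but the apply would still be meaningless here since the object has no `kind` to create.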
	I1002 06:58:50.785374  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.786592  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.924494  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.946113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.285309  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.424519  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.446358  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.785764  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.787620  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.924671  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.945518  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.284608  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.286328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.426252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.444955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.785415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.786501  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.924360  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.945603  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.286059  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.286081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.426061  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.445434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.784563  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.787018  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.926712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.285670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.286270  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.425263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.445015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.783971  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.785518  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.924652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.944701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.284095  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.285982  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.425045  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.445159  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.784789  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.785811  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.925024  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.945670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.284935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.286230  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.424865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.444979  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.784010  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.785095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.925082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.945267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.285037  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.290841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.423992  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.444492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.785708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.786647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.944543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.284397  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.286589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.424263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.446278  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.784592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.786223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.925275  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.945639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.284167  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.286213  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.424554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.446331  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.786351  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.786532  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.924799  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.944552  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.284593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.286147  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.427708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.446640  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.783993  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.786195  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.925109  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.945645  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.284268  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.286567  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.425880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.444926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.784751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.786669  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.924082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.945409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.285484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.287955  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.424588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.445328  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.785933  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.786611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.924311  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.945554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.284664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.286758  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.424558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.445443  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.785718  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.786015  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.924950  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.945320  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.285692  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.287456  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.423909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.445028  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.784417  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.785847  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.859977  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:04.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.944867  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:05.286410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.424815  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.444725  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:59:05.565727  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.565775  566681 retry.go:31] will retry after 12.962063584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.784083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.785399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.924301  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.945548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.284341  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.285025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.424577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.445930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.785592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.785777  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.924651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.944548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.284807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.286980  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.424593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.445604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.785681  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.786565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.924412  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.945298  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.284890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.285768  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.424422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.446875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.784632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.786747  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.924452  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.284701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.286699  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.424832  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.445005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.785080  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.923720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.944468  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.285848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.285877  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.425574  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.445229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.785800  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.788069  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.924958  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.945132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.284817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.286986  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.424693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.444335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.786755  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.788412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.924402  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.944935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.285499  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.285734  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.424709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.445959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.785549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.788041  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.924691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.944292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.285683  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.424754  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.445585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.784745  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.786053  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.925403  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.945860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.285184  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.286959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.424804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.446097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.791558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.791556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.927542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.949956  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.284639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.286617  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.426580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.446175  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.784496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.787071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.925830  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.945618  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.286160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:16.287392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.424973  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.446497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.789545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.790116  566681 kapi.go:107] duration metric: took 1m11.009348953s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:59:16.925187  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.947267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.287647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.426165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:17.450844  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.786988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.928406  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.027597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.293020  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.429378  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.449227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.528488  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:18.796448  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.929553  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.946292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.288404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.429199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.452666  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.792639  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.864991  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.336449949s)
	W1002 06:59:19.865069  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:19.865160  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865179  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865566  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:59:19.865575  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:59:19.865834  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:59:19.865969  566681 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:59:19.924481  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.945058  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.286730  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.424767  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.445496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.787056  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.925303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.945594  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.285610  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.424114  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.445438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.786589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.924253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.944783  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.285375  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.424724  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.445811  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.828328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.929492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.945629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.286455  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.424116  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.444871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.785953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.924350  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.945321  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.286907  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.424613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.445706  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.786265  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.925165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.944432  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.286899  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.424337  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.445373  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.786646  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:25.924121  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:25.944695  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.286707  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.425250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.445323  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:26.786287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:26.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:26.945489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.286403  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.424957  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.445376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:27.786820  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:27.924170  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:27.945197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.424241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.445542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:28.786419  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:28.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:28.945740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.286366  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.425536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.445687  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:29.788123  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:29.925722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:29.944760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.285395  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.425015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.445071  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:30.786362  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:30.925693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:30.945540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.286268  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.424296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.446123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:31.786155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:31.926684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:31.945375  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.286413  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.424180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.444838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:32.786253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:32.925151  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:32.944944  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.288748  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.425620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.445650  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:33.786358  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:33.924738  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:33.944757  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.285092  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.424998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.445067  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:34.786516  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:34.924306  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:34.945543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.286428  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.423533  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.445039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:35.785517  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:35.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:35.944555  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.286468  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.424778  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.444808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:36.785451  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:36.924018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:36.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.287660  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.424005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.445419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:37.785743  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:37.924870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:37.944575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.286370  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.424689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.444639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:38.786644  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:38.928760  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:38.945529  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.286055  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.425011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:39.787058  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:39.924829  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:39.944865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.285681  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.424212  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.445570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:40.786536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:40.924039  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:40.945611  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.286872  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.425081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.445160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:41.785854  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:41.924803  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:41.945395  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.286806  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.424531  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.445213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:42.785794  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:42.924199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:42.946416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.287223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.425005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.445179  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:43.786152  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:43.924626  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:43.945545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.286313  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.425004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.445925  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:44.786682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:44.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:44.944902  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.286167  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.424932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.444879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:45.785378  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:45.925864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:45.945123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.286422  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.424954  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.445018  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:46.786489  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:46.924425  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:46.945064  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.286244  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.425181  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.445110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:47.785417  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:47.923870  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:47.944712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.287782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.424751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.444542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:48.786556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:48.924410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:48.945514  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.286856  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.424634  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.444823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:49.786341  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:49.925249  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:49.945585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.287532  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.427364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.449565  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:50.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:50.926679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:50.947416  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.289682  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.428232  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.445465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:51.787537  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:51.926415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:51.945253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.285757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.424433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.448251  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:52.785971  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:52.928422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:52.946461  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.286536  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.452271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:53.786128  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:53.926032  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:53.946426  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.287601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.424345  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.445705  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:54.787096  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:54.924759  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:54.946688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.290180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.519704  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:55.519891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:55.926689  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:55.946557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.286054  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.425914  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.447300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:56.785957  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:56.924030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:56.949871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.291565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.428120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.526092  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:57.786283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:57.933203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:57.952823  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.290757  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:58.425788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:58.445898  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:58.785286  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.135410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.135484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.289658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.424763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.444901  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:59.789990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:59.927768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:59.950570  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.288666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.424489  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.444995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:00.785712  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:00.928193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:00.945797  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.289874  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.429342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.447102  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:01.787399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:01.924633  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:01.944955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.288296  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.432709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.448119  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:02.788304  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:02.936551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:02.950283  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.291180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.429826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.446896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:03.789649  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:03.930297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:03.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.285728  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.423878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.445021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:04.785989  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:04.926604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:04.946365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.289629  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.424560  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.446580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:05.786184  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:05.925038  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:05.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.286414  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.425072  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.445415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:06.786235  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:06.924932  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:06.945108  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:07.286318  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... repeated polling lines elided: kapi.go:96 re-checked pods "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "kubernetes.io/minikube-addons=gcp-auth" roughly every 500ms from 07:00:07 through 07:00:42; all three remained Pending: [<nil>] ...]
	I1002 07:00:42.787879  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.928999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.947900  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.289340  566681 kapi.go:107] duration metric: took 2m37.507327929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 07:00:43.426593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.445627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... repeated polling lines elided: kapi.go:96 re-checked pods "kubernetes.io/minikube-addons=csi-hostpath-driver" and "kubernetes.io/minikube-addons=gcp-auth" roughly every 500ms from 07:00:43 through 07:00:45; both remained Pending: [<nil>] ...]
	I1002 07:00:46.427998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:46.446348  566681 kapi.go:107] duration metric: took 2m35.504841728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 07:00:46.448361  566681 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-535714 cluster.
	I1002 07:00:46.449772  566681 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 07:00:46.451121  566681 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 07:00:46.925947  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... repeated polling lines elided: kapi.go:96 re-checked pod "kubernetes.io/minikube-addons=csi-hostpath-driver" roughly every 500ms from 07:00:47 through 07:00:49; state remained Pending: [<nil>] ...]
	I1002 07:00:50.425299  566681 kapi.go:107] duration metric: took 2m42.504972928s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 07:00:50.428467  566681 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 07:00:50.429978  566681 addons.go:514] duration metric: took 2m54.232824958s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds metrics-server storage-provisioner storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 07:00:50.430050  566681 start.go:246] waiting for cluster config update ...
	I1002 07:00:50.430076  566681 start.go:255] writing updated cluster config ...
	I1002 07:00:50.430525  566681 ssh_runner.go:195] Run: rm -f paused
	I1002 07:00:50.439887  566681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:50.446240  566681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.451545  566681 pod_ready.go:94] pod "coredns-66bc5c9577-w7hjm" is "Ready"
	I1002 07:00:50.451589  566681 pod_ready.go:86] duration metric: took 5.295665ms for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.454257  566681 pod_ready.go:83] waiting for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.459251  566681 pod_ready.go:94] pod "etcd-addons-535714" is "Ready"
	I1002 07:00:50.459291  566681 pod_ready.go:86] duration metric: took 4.998226ms for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.463385  566681 pod_ready.go:83] waiting for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.473863  566681 pod_ready.go:94] pod "kube-apiserver-addons-535714" is "Ready"
	I1002 07:00:50.473899  566681 pod_ready.go:86] duration metric: took 10.481477ms for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.478391  566681 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.845519  566681 pod_ready.go:94] pod "kube-controller-manager-addons-535714" is "Ready"
	I1002 07:00:50.845556  566681 pod_ready.go:86] duration metric: took 367.127625ms for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.046035  566681 pod_ready.go:83] waiting for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.445054  566681 pod_ready.go:94] pod "kube-proxy-z495t" is "Ready"
	I1002 07:00:51.445095  566681 pod_ready.go:86] duration metric: took 399.024039ms for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.644949  566681 pod_ready.go:83] waiting for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045721  566681 pod_ready.go:94] pod "kube-scheduler-addons-535714" is "Ready"
	I1002 07:00:52.045756  566681 pod_ready.go:86] duration metric: took 400.769133ms for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045769  566681 pod_ready.go:40] duration metric: took 1.605821704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:52.107681  566681 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 07:00:52.109482  566681 out.go:179] * Done! kubectl is now configured to use "addons-535714" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.280307814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388854280280800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf145fcc-825d-4160-bce5-dfdebb3616a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.281034117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f36d5b7-0776-486e-a61c-c972190e77b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.281332314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f36d5b7-0776-486e-a61c-c972190e77b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.281848365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]strin
g{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec
{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd
347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272
c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f36d5b7-0776-486e-a61c-c972190e77b2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.325610613Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4495a53-f39a-4a72-81a9-3c9805f48983 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.326169500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4495a53-f39a-4a72-81a9-3c9805f48983 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.327918716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55bde8b0-5da1-4741-aadf-c7835e272be0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.329273743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388854329249735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55bde8b0-5da1-4741-aadf-c7835e272be0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.329872451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ddfd07b-c681-43bd-bcce-93ea8eafaf88 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.329946366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ddfd07b-c681-43bd-bcce-93ea8eafaf88 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.330529201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]strin
g{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec
{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd
347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272
c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ddfd07b-c681-43bd-bcce-93ea8eafaf88 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.369013504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46f23141-1569-46b7-b361-a565c3029d47 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.369157123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46f23141-1569-46b7-b361-a565c3029d47 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.370235135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7404f4d9-293f-4db0-b5df-65307c7e5055 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.372918340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388854372888646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7404f4d9-293f-4db0-b5df-65307c7e5055 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.373586335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a670993e-5215-480e-9c64-bb2a7835d95a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.373952701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a670993e-5215-480e-9c64-bb2a7835d95a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.374763691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]strin
g{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec
{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd
347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272
c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a670993e-5215-480e-9c64-bb2a7835d95a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.415109967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02234377-2f58-482e-a378-8c84e0c2e769 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.415183290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02234377-2f58-482e-a378-8c84e0c2e769 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.416573308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52bcf77e-49fd-46d5-b812-609eb195a273 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.418021177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388854417995148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52bcf77e-49fd-46d5-b812-609eb195a273 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.418662142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e689aa1-9e62-45f1-a47d-63c186b1f823 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.418748775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e689aa1-9e62-45f1-a47d-63c186b1f823 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:07:34 addons-535714 crio[827]: time="2025-10-02 07:07:34.419852793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]strin
g{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec
{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd
347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272
c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e689aa1-9e62-45f1-a47d-63c186b1f823 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	86667c9385b67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   2fe38d26ed81e       busybox
	6e1593fcd2d1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f4c3a8c0ea5c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	65d9fdba36a17       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   e2277305f110b       csi-hostpathplugin-8sjk8
	81f190fa89d8e       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             6 minutes ago       Running             controller                               0                   2c0a4b75d16bb       ingress-nginx-controller-9cc49f96f-jcwrw
	0683a8b55d03d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3456f5ab4e9db       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f6808e1f9304       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   dabf0b0e1eb70       csi-hostpath-resizer-0
	24139e6a7a8b1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   e2ed9baa384a5       csi-hostpath-attacher-0
	46de36d65127e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   e2277305f110b       csi-hostpathplugin-8sjk8
	98d5407fe4705       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   e2ad15837b991       snapshot-controller-7d9fbc56b8-g4hd4
	ea44a6e53635f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   bbec6993c46f7       snapshot-controller-7d9fbc56b8-knwl8
	2f84e33ebf14f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              patch                                    0                   45c7f94d02bfb       ingress-nginx-admission-patch-46z2n
	5ce0b3e6c8fef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   7 minutes ago       Exited              create                                   0                   13a0722f22fb7       ingress-nginx-admission-create-jsw7z
	d20e001ce5fa7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            7 minutes ago       Running             gadget                                   0                   53cbb87b563ff       gadget-2hn79
	b1d2fad243c3b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             8 minutes ago       Running             local-path-provisioner                   0                   99eafaf0bf06b       local-path-provisioner-648f6765c9-gf62q
	c68a602009da4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               8 minutes ago       Running             minikube-ingress-dns                     0                   1239599eb3508       kube-ingress-dns-minikube
	0f29426982799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             9 minutes ago       Running             storage-provisioner                      0                   348af25e84579       storage-provisioner
	58aa192645e96       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     9 minutes ago       Running             amd-gpu-device-plugin                    0                   dba3c49629455       amd-gpu-device-plugin-f7qcs
	6e31cb36c4500       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             9 minutes ago       Running             coredns                                  0                   4fcabfc373e60       coredns-66bc5c9577-w7hjm
	fb130499febb3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             9 minutes ago       Running             kube-proxy                               0                   646600c8d86f7       kube-proxy-z495t
	466837c8cdfcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   c7d4e0eb984a2       etcd-addons-535714
	da8295539fc0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             9 minutes ago       Running             kube-scheduler                           0                   36d2846a22a84       kube-scheduler-addons-535714
	da58df3cad660       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             9 minutes ago       Running             kube-controller-manager                  0                   63f4cb9d3437a       kube-controller-manager-addons-535714
	deaf436584a26       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             9 minutes ago       Running             kube-apiserver                           0                   35f49d5f3b8fb       kube-apiserver-addons-535714
	
	
	==> coredns [6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb] <==
	[INFO] 10.244.0.7:35110 - 11487 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000105891s
	[INFO] 10.244.0.7:35110 - 31639 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100284s
	[INFO] 10.244.0.7:35110 - 25746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080168s
	[INFO] 10.244.0.7:35110 - 43819 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100728s
	[INFO] 10.244.0.7:35110 - 63816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124028s
	[INFO] 10.244.0.7:35110 - 35022 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129164s
	[INFO] 10.244.0.7:35110 - 28119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001725128s
	[INFO] 10.244.0.7:50584 - 36630 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148556s
	[INFO] 10.244.0.7:50584 - 36962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067971s
	[INFO] 10.244.0.7:37190 - 758 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052949s
	[INFO] 10.244.0.7:37190 - 1043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051809s
	[INFO] 10.244.0.7:37461 - 4143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057036s
	[INFO] 10.244.0.7:37461 - 4397 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049832s
	[INFO] 10.244.0.7:36180 - 39849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111086s
	[INFO] 10.244.0.7:36180 - 40050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069757s
	[INFO] 10.244.0.23:54237 - 52266 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001020809s
	[INFO] 10.244.0.23:46188 - 47837 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755825s
	[INFO] 10.244.0.23:50620 - 40298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145474s
	[INFO] 10.244.0.23:46344 - 40921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123896s
	[INFO] 10.244.0.23:50353 - 65439 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000272665s
	[INFO] 10.244.0.23:50633 - 23346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143762s
	[INFO] 10.244.0.23:52616 - 28857 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002777615s
	[INFO] 10.244.0.23:55533 - 44086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003112269s
	[INFO] 10.244.0.27:55844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000811242s
	[INFO] 10.244.0.27:51921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000498985s
	
	
	==> describe nodes <==
	Name:               addons-535714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-535714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535714
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-535714"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:57:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535714
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:07:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:05:20 +0000   Thu, 02 Oct 2025 06:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-535714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 26ed18e3cae343e2ba2a85be4a0a7371
	  System UUID:                26ed18e3-cae3-43e2-ba2a-85be4a0a7371
	  Boot ID:                    73babc46-f812-4e67-b425-db513a204e97
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  gadget                      gadget-2hn79                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jcwrw                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m29s
	  kube-system                 amd-gpu-device-plugin-f7qcs                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-66bc5c9577-w7hjm                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m38s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 csi-hostpathplugin-8sjk8                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 etcd-addons-535714                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m45s
	  kube-system                 kube-apiserver-addons-535714                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-controller-manager-addons-535714                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-proxy-z495t                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-scheduler-addons-535714                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 snapshot-controller-7d9fbc56b8-g4hd4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-knwl8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  local-path-storage          helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  local-path-storage          local-path-provisioner-648f6765c9-gf62q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m37s  kube-proxy       
	  Normal  Starting                 9m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m43s  kubelet          Node addons-535714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s  kubelet          Node addons-535714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m43s  kubelet          Node addons-535714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m42s  kubelet          Node addons-535714 status is now: NodeReady
	  Normal  RegisteredNode           9m40s  node-controller  Node addons-535714 event: Registered Node addons-535714 in Controller
	
	
	==> dmesg <==
	[  +8.319606] kauditd_printk_skb: 17 callbacks suppressed
	[Oct 2 06:59] kauditd_printk_skb: 20 callbacks suppressed
	[ +33.860109] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.779557] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 2 07:00] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.976810] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.109220] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.510995] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.560914] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.223140] kauditd_printk_skb: 56 callbacks suppressed
	[Oct 2 07:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.884695] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.185211] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.060908] kauditd_printk_skb: 58 callbacks suppressed
	[Oct 2 07:02] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.331616] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.250929] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:03] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.099939] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.783953] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 07:06] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.000320] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2] <==
	{"level":"info","ts":"2025-10-02T06:59:59.120512Z","caller":"traceutil/trace.go:172","msg":"trace[1384240821] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"205.082518ms","start":"2025-10-02T06:59:58.915407Z","end":"2025-10-02T06:59:59.120489Z","steps":["trace[1384240821] 'read index received'  (duration: 205.072637ms)","trace[1384240821] 'applied index is now lower than readState.Index'  (duration: 8.699µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T06:59:59.121116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.148075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-10-02T06:59:59.121160Z","caller":"traceutil/trace.go:172","msg":"trace[787006594] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n; range_end:; response_count:1; response_revision:1085; }","duration":"198.245202ms","start":"2025-10-02T06:59:58.922907Z","end":"2025-10-02T06:59:59.121152Z","steps":["trace[787006594] 'agreement among raft nodes before linearized reading'  (duration: 198.083065ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T06:59:59.121300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.835357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.121339Z","caller":"traceutil/trace.go:172","msg":"trace[1316712396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"183.87568ms","start":"2025-10-02T06:59:58.937457Z","end":"2025-10-02T06:59:59.121332Z","steps":["trace[1316712396] 'agreement among raft nodes before linearized reading'  (duration: 183.815946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:32.832647Z","caller":"traceutil/trace.go:172","msg":"trace[1453851995] linearizableReadLoop","detail":"{readStateIndex:1231; appliedIndex:1231; }","duration":"220.066962ms","start":"2025-10-02T07:00:32.612509Z","end":"2025-10-02T07:00:32.832576Z","steps":["trace[1453851995] 'read index received'  (duration: 220.05963ms)","trace[1453851995] 'applied index is now lower than readState.Index'  (duration: 6.189µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T07:00:32.832730Z","caller":"traceutil/trace.go:172","msg":"trace[302351669] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"243.94686ms","start":"2025-10-02T07:00:32.588772Z","end":"2025-10-02T07:00:32.832719Z","steps":["trace[302351669] 'process raft request'  (duration: 243.833114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:32.832967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.479862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T07:00:32.833001Z","caller":"traceutil/trace.go:172","msg":"trace[1089606970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"220.525584ms","start":"2025-10-02T07:00:32.612469Z","end":"2025-10-02T07:00:32.832995Z","steps":["trace[1089606970] 'agreement among raft nodes before linearized reading'  (duration: 220.422716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.990824Z","caller":"traceutil/trace.go:172","msg":"trace[1822440841] linearizableReadLoop","detail":"{readStateIndex:1259; appliedIndex:1259; }","duration":"216.288139ms","start":"2025-10-02T07:00:39.774473Z","end":"2025-10-02T07:00:39.990762Z","steps":["trace[1822440841] 'read index received'  (duration: 216.279919ms)","trace[1822440841] 'applied index is now lower than readState.Index'  (duration: 6.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:00:39.991358Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.077704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:00:39.991456Z","caller":"traceutil/trace.go:172","msg":"trace[1082597067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1206; }","duration":"217.190679ms","start":"2025-10-02T07:00:39.774258Z","end":"2025-10-02T07:00:39.991449Z","steps":["trace[1082597067] 'agreement among raft nodes before linearized reading'  (duration: 216.738402ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.992313Z","caller":"traceutil/trace.go:172","msg":"trace[515400758] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"337.963385ms","start":"2025-10-02T07:00:39.654341Z","end":"2025-10-02T07:00:39.992305Z","steps":["trace[515400758] 'process raft request'  (duration: 337.312964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:39.992477Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.654280Z","time spent":"338.099015ms","remote":"127.0.0.1:56776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1205 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-02T07:00:39.994757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.655974Z","time spent":"338.780211ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-10-02T07:02:18.249354Z","caller":"traceutil/trace.go:172","msg":"trace[1937839981] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"110.209012ms","start":"2025-10-02T07:02:18.139042Z","end":"2025-10-02T07:02:18.249251Z","steps":["trace[1937839981] 'process raft request'  (duration: 107.760601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:02:25.358154Z","caller":"traceutil/trace.go:172","msg":"trace[1514029901] linearizableReadLoop","detail":"{readStateIndex:1683; appliedIndex:1683; }","duration":"269.707219ms","start":"2025-10-02T07:02:25.088427Z","end":"2025-10-02T07:02:25.358135Z","steps":["trace[1514029901] 'read index received'  (duration: 269.698824ms)","trace[1514029901] 'applied index is now lower than readState.Index'  (duration: 7.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:02:25.358835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.337456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.358908Z","caller":"traceutil/trace.go:172","msg":"trace[129833481] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"270.47424ms","start":"2025-10-02T07:02:25.088423Z","end":"2025-10-02T07:02:25.358898Z","steps":["trace[129833481] 'agreement among raft nodes before linearized reading'  (duration: 270.303097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.361904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.156634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.361957Z","caller":"traceutil/trace.go:172","msg":"trace[228810763] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1605; }","duration":"257.224721ms","start":"2025-10-02T07:02:25.104724Z","end":"2025-10-02T07:02:25.361949Z","steps":["trace[228810763] 'agreement among raft nodes before linearized reading'  (duration: 257.141662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.13527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363670Z","caller":"traceutil/trace.go:172","msg":"trace[2116337020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1606; }","duration":"129.197912ms","start":"2025-10-02T07:02:25.234464Z","end":"2025-10-02T07:02:25.363662Z","steps":["trace[2116337020] 'agreement among raft nodes before linearized reading'  (duration: 129.113844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.575698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363939Z","caller":"traceutil/trace.go:172","msg":"trace[2132272707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1606; }","duration":"192.616449ms","start":"2025-10-02T07:02:25.171317Z","end":"2025-10-02T07:02:25.363933Z","steps":["trace[2132272707] 'agreement among raft nodes before linearized reading'  (duration: 192.563634ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:07:34 up 10 min,  0 users,  load average: 0.34, 0.83, 0.69
	Linux addons-535714 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68] <==
	W1002 06:59:04.261853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:59:04.262027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:04.261865       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262054       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 06:59:04.263426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:19.669740       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:19.669928       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 06:59:19.671457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.672416       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.677780       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.698801       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	I1002 06:59:19.813028       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 07:01:02.988144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59036: use of closed network connection
	E1002 07:01:03.204248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59068: use of closed network connection
	I1002 07:01:12.103579       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 07:01:12.401820       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.38.17"}
	I1002 07:01:12.978874       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.6.38"}
	I1002 07:01:20.686056       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20] <==
	I1002 06:57:54.853402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 06:57:54.853436       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 06:57:54.854794       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:57:54.854865       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:57:54.855046       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 06:57:54.858148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 06:57:54.858221       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 06:57:54.858258       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 06:57:54.858263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 06:57:54.858268       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 06:57:54.860904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:57:54.863351       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:57:54.869106       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-535714" podCIDRs=["10.244.0.0/24"]
	E1002 06:58:03.439760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:58:24.819245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:24.819664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:58:24.819801       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:58:24.847762       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:58:24.855798       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:58:24.921306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:58:24.957046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:58:54.928427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:54.966681       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 07:01:16.701698       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1002 07:02:37.947143       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b] <==
	I1002 06:57:56.940558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:57:57.042011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:57:57.042117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1002 06:57:57.042205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:57:57.167383       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 06:57:57.167427       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 06:57:57.167460       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:57:57.190949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:57:57.192886       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:57:57.192902       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:57:57.294325       1 config.go:200] "Starting service config controller"
	I1002 06:57:57.294358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:57:57.294429       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:57:57.294434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:57:57.294455       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:57:57.294459       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:57:57.438397       1 config.go:309] "Starting node config controller"
	I1002 06:57:57.441950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:57:57.479963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:57:57.494463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:57:57.494530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:57:57.494543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca] <==
	E1002 06:57:47.853654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:57:47.853709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:47.853767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:57:47.853824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:57:47.854040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:47.855481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:47.854491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.707149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:57:48.761606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:57:48.783806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:57:48.817274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:57:48.856898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:48.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:57:48.860214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:57:48.880906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.896863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:48.913429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:57:48.964287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:57:48.985241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:57:49.005874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:57:49.118344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:57:49.123456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:49.157781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:57:49.202768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 06:57:51.042340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:06:24 addons-535714 kubelet[1509]: I1002 07:06:24.173393    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f7qcs" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:06:31 addons-535714 kubelet[1509]: E1002 07:06:31.746016    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388791745202419  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:31 addons-535714 kubelet[1509]: E1002 07:06:31.746147    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388791745202419  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:41 addons-535714 kubelet[1509]: E1002 07:06:41.749893    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388801749349789  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:41 addons-535714 kubelet[1509]: E1002 07:06:41.749918    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388801749349789  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:42 addons-535714 kubelet[1509]: E1002 07:06:42.085021    1509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:06:42 addons-535714 kubelet[1509]: E1002 07:06:42.085130    1509 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:06:42 addons-535714 kubelet[1509]: E1002 07:06:42.085428    1509 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(2f677461-445c-4e2a-aeaa-28f894f29b0b): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:06:42 addons-535714 kubelet[1509]: E1002 07:06:42.085470    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2f677461-445c-4e2a-aeaa-28f894f29b0b"
	Oct 02 07:06:44 addons-535714 kubelet[1509]: I1002 07:06:44.636775    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/43bb5445-e38f-4659-ba34-65c081b7d396-script\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:06:44 addons-535714 kubelet[1509]: I1002 07:06:44.636821    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/43bb5445-e38f-4659-ba34-65c081b7d396-data\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:06:44 addons-535714 kubelet[1509]: I1002 07:06:44.636881    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtcf\" (UniqueName: \"kubernetes.io/projected/43bb5445-e38f-4659-ba34-65c081b7d396-kube-api-access-nvtcf\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"43bb5445-e38f-4659-ba34-65c081b7d396\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:06:51 addons-535714 kubelet[1509]: E1002 07:06:51.751880    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388811751566781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:51 addons-535714 kubelet[1509]: E1002 07:06:51.751944    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388811751566781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:06:57 addons-535714 kubelet[1509]: E1002 07:06:57.173720    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="2f677461-445c-4e2a-aeaa-28f894f29b0b"
	Oct 02 07:06:57 addons-535714 kubelet[1509]: I1002 07:06:57.174413    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:07:01 addons-535714 kubelet[1509]: E1002 07:07:01.756594    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388821755524591  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:01 addons-535714 kubelet[1509]: E1002 07:07:01.756748    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388821755524591  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:11 addons-535714 kubelet[1509]: E1002 07:07:11.760453    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388831759472844  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:11 addons-535714 kubelet[1509]: E1002 07:07:11.760475    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388831759472844  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:21 addons-535714 kubelet[1509]: E1002 07:07:21.763823    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388841762939163  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:21 addons-535714 kubelet[1509]: E1002 07:07:21.763846    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388841762939163  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:31 addons-535714 kubelet[1509]: E1002 07:07:31.768338    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388851767309249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:31 addons-535714 kubelet[1509]: E1002 07:07:31.768381    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388851767309249  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:07:32 addons-535714 kubelet[1509]: I1002 07:07:32.174301    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f7qcs" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0] <==
	W1002 07:07:10.510323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:12.514974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:12.523553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:14.527785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:14.533444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:16.537585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:16.545666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:18.549224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:18.554583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:20.557643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:20.573310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:22.583510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:22.589297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:24.593364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:24.602286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:26.605950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:26.611502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:28.614797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:28.622243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:30.625879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:30.631498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:32.635343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:32.644603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:34.648337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:07:34.654706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-535714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3: exit status 1 (86.70486ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jxhkh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m23s                  default-scheduler  Successfully assigned default/nginx to addons-535714
	  Warning  Failed     4m38s (x2 over 5m20s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     114s (x3 over 5m20s)   kubelet            Error: ErrImagePull
	  Warning  Failed     114s                   kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    85s (x4 over 5m19s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     85s (x4 over 5m19s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    72s (x4 over 6m22s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:02:40 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znf77 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-znf77:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m55s                default-scheduler  Successfully assigned default/task-pv-pod to addons-535714
	  Warning  Failed     53s (x2 over 2m54s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     53s (x2 over 2m54s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    38s (x2 over 2m53s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     38s (x2 over 2m53s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x3 over 4m54s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48lf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g48lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jsw7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46z2n" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (40.264491659s)
--- FAIL: TestAddons/parallel/LocalPath (343.01s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (133.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-hpzfn" [9071de7c-4e8a-43a9-893f-bdbd130175ef] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:337: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-10-02 07:03:32.531688745 +0000 UTC m=+390.699667532
addons_test.go:1047: (dbg) Run:  kubectl --context addons-535714 describe po yakd-dashboard-5ff678cb9-hpzfn -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-535714 describe po yakd-dashboard-5ff678cb9-hpzfn -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-hpzfn
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-535714/192.168.39.164
Start Time:       Thu, 02 Oct 2025 06:58:04 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-hpzfn (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7pr7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z7pr7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m28s                  default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-hpzfn to addons-535714
  Warning  Failed     2m59s (x2 over 3m43s)  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     107s (x3 over 3m43s)   kubelet            Error: ErrImagePull
  Warning  Failed     107s                   kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    77s (x4 over 3m43s)    kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     77s (x4 over 3m43s)    kubelet            Error: ImagePullBackOff
  Normal   Pulling    62s (x4 over 5m24s)    kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-535714 logs yakd-dashboard-5ff678cb9-hpzfn -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-535714 logs yakd-dashboard-5ff678cb9-hpzfn -n yakd-dashboard: exit status 1 (87.336307ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-hpzfn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-535714 logs yakd-dashboard-5ff678cb9-hpzfn -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535714 -n addons-535714
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 logs -n 25: (1.522337534s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-169608                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ -p binary-mirror-257523                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-257523 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ addons  │ enable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ start   │ -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:00 UTC │ 02 Oct 25 07:00 UTC │
	│ addons  │ addons-535714 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-535714 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ip      │ addons-535714 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-535714 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-535714        │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:12.613104  566681 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:12.613401  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613412  566681 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:12.613416  566681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:12.613691  566681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:12.614327  566681 out.go:368] Setting JSON to false
	I1002 06:57:12.615226  566681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49183,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:12.615318  566681 start.go:140] virtualization: kvm guest
	I1002 06:57:12.616912  566681 out.go:179] * [addons-535714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:12.618030  566681 notify.go:220] Checking for updates...
	I1002 06:57:12.618070  566681 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:57:12.619267  566681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:12.620404  566681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:12.621815  566681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:12.622922  566681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:57:12.623998  566681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:57:12.625286  566681 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:12.655279  566681 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 06:57:12.656497  566681 start.go:304] selected driver: kvm2
	I1002 06:57:12.656511  566681 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:12.656523  566681 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:57:12.657469  566681 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.657563  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.671466  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.671499  566681 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:12.684735  566681 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:12.684785  566681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:12.685037  566681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:57:12.685069  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:12.685110  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:12.685121  566681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:12.685226  566681 start.go:348] cluster config:
	{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:12.685336  566681 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:12.687549  566681 out.go:179] * Starting "addons-535714" primary control-plane node in "addons-535714" cluster
	I1002 06:57:12.688758  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:12.688809  566681 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:12.688824  566681 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:12.688927  566681 preload.go:233] Found /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 06:57:12.688941  566681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:57:12.689355  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:12.689385  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json: {Name:mkd226c1b0f282f7928061e8123511cda66ecb61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:12.689560  566681 start.go:360] acquireMachinesLock for addons-535714: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 06:57:12.689631  566681 start.go:364] duration metric: took 53.377µs to acquireMachinesLock for "addons-535714"
	I1002 06:57:12.689654  566681 start.go:93] Provisioning new machine with config: &{Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:12.689738  566681 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 06:57:12.691999  566681 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 06:57:12.692183  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:12.692244  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:12.705101  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I1002 06:57:12.705724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:12.706300  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:12.706320  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:12.706770  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:12.707010  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:12.707209  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:12.707401  566681 start.go:159] libmachine.API.Create for "addons-535714" (driver="kvm2")
	I1002 06:57:12.707450  566681 client.go:168] LocalClient.Create starting
	I1002 06:57:12.707494  566681 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem
	I1002 06:57:12.888250  566681 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem
	I1002 06:57:13.081005  566681 main.go:141] libmachine: Running pre-create checks...
	I1002 06:57:13.081030  566681 main.go:141] libmachine: (addons-535714) Calling .PreCreateCheck
	I1002 06:57:13.081598  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:13.082053  566681 main.go:141] libmachine: Creating machine...
	I1002 06:57:13.082069  566681 main.go:141] libmachine: (addons-535714) Calling .Create
	I1002 06:57:13.082276  566681 main.go:141] libmachine: (addons-535714) creating domain...
	I1002 06:57:13.082300  566681 main.go:141] libmachine: (addons-535714) creating network...
	I1002 06:57:13.083762  566681 main.go:141] libmachine: (addons-535714) DBG | found existing default network
	I1002 06:57:13.084004  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084021  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>default</name>
	I1002 06:57:13.084029  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 06:57:13.084036  566681 main.go:141] libmachine: (addons-535714) DBG |   <forward mode='nat'>
	I1002 06:57:13.084041  566681 main.go:141] libmachine: (addons-535714) DBG |     <nat>
	I1002 06:57:13.084047  566681 main.go:141] libmachine: (addons-535714) DBG |       <port start='1024' end='65535'/>
	I1002 06:57:13.084051  566681 main.go:141] libmachine: (addons-535714) DBG |     </nat>
	I1002 06:57:13.084055  566681 main.go:141] libmachine: (addons-535714) DBG |   </forward>
	I1002 06:57:13.084061  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 06:57:13.084068  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 06:57:13.084084  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 06:57:13.084098  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084111  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 06:57:13.084123  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084131  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084152  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084191  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084749  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.084601  566709 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1002 06:57:13.084771  566681 main.go:141] libmachine: (addons-535714) DBG | defining private network:
	I1002 06:57:13.084780  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.084785  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.084801  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.084820  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.084831  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.084840  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.084851  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.084861  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.084868  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.084878  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.084888  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.090767  566681 main.go:141] libmachine: (addons-535714) DBG | creating private network mk-addons-535714 192.168.39.0/24...
	I1002 06:57:13.158975  566681 main.go:141] libmachine: (addons-535714) DBG | private network mk-addons-535714 192.168.39.0/24 created
	I1002 06:57:13.159275  566681 main.go:141] libmachine: (addons-535714) DBG | <network>
	I1002 06:57:13.159307  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>mk-addons-535714</name>
	I1002 06:57:13.159316  566681 main.go:141] libmachine: (addons-535714) setting up store path in /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.159335  566681 main.go:141] libmachine: (addons-535714) building disk image from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:13.159343  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>30f68bcb-0ec3-45ac-9012-251c5feb215b</uuid>
	I1002 06:57:13.159350  566681 main.go:141] libmachine: (addons-535714) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 06:57:13.159356  566681 main.go:141] libmachine: (addons-535714) DBG |   <mac address='52:54:00:03:a3:ce'/>
	I1002 06:57:13.159360  566681 main.go:141] libmachine: (addons-535714) DBG |   <dns enable='no'/>
	I1002 06:57:13.159383  566681 main.go:141] libmachine: (addons-535714) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 06:57:13.159402  566681 main.go:141] libmachine: (addons-535714) DBG |     <dhcp>
	I1002 06:57:13.159413  566681 main.go:141] libmachine: (addons-535714) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 06:57:13.159428  566681 main.go:141] libmachine: (addons-535714) DBG |     </dhcp>
	I1002 06:57:13.159461  566681 main.go:141] libmachine: (addons-535714) Downloading /home/jenkins/minikube-integration/21643-562157/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 06:57:13.159477  566681 main.go:141] libmachine: (addons-535714) DBG |   </ip>
	I1002 06:57:13.159489  566681 main.go:141] libmachine: (addons-535714) DBG | </network>
	I1002 06:57:13.159500  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:13.159522  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.159293  566709 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.427161  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.426986  566709 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa...
	I1002 06:57:13.691596  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691434  566709 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk...
	I1002 06:57:13.691620  566681 main.go:141] libmachine: (addons-535714) DBG | Writing magic tar header
	I1002 06:57:13.691651  566681 main.go:141] libmachine: (addons-535714) DBG | Writing SSH key tar header
	I1002 06:57:13.691660  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:13.691559  566709 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 ...
	I1002 06:57:13.691671  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714
	I1002 06:57:13.691678  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube/machines
	I1002 06:57:13.691687  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:13.691694  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21643-562157
	I1002 06:57:13.691702  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 06:57:13.691710  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home/jenkins
	I1002 06:57:13.691724  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714 (perms=drwx------)
	I1002 06:57:13.691738  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube/machines (perms=drwxr-xr-x)
	I1002 06:57:13.691747  566681 main.go:141] libmachine: (addons-535714) DBG | checking permissions on dir: /home
	I1002 06:57:13.691758  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157/.minikube (perms=drwxr-xr-x)
	I1002 06:57:13.691769  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration/21643-562157 (perms=drwxrwxr-x)
	I1002 06:57:13.691781  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 06:57:13.691789  566681 main.go:141] libmachine: (addons-535714) DBG | skipping /home - not owner
	I1002 06:57:13.691803  566681 main.go:141] libmachine: (addons-535714) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 06:57:13.691811  566681 main.go:141] libmachine: (addons-535714) defining domain...
	I1002 06:57:13.693046  566681 main.go:141] libmachine: (addons-535714) defining domain using XML: 
	I1002 06:57:13.693074  566681 main.go:141] libmachine: (addons-535714) <domain type='kvm'>
	I1002 06:57:13.693080  566681 main.go:141] libmachine: (addons-535714)   <name>addons-535714</name>
	I1002 06:57:13.693085  566681 main.go:141] libmachine: (addons-535714)   <memory unit='MiB'>4096</memory>
	I1002 06:57:13.693090  566681 main.go:141] libmachine: (addons-535714)   <vcpu>2</vcpu>
	I1002 06:57:13.693093  566681 main.go:141] libmachine: (addons-535714)   <features>
	I1002 06:57:13.693098  566681 main.go:141] libmachine: (addons-535714)     <acpi/>
	I1002 06:57:13.693102  566681 main.go:141] libmachine: (addons-535714)     <apic/>
	I1002 06:57:13.693109  566681 main.go:141] libmachine: (addons-535714)     <pae/>
	I1002 06:57:13.693115  566681 main.go:141] libmachine: (addons-535714)   </features>
	I1002 06:57:13.693124  566681 main.go:141] libmachine: (addons-535714)   <cpu mode='host-passthrough'>
	I1002 06:57:13.693132  566681 main.go:141] libmachine: (addons-535714)   </cpu>
	I1002 06:57:13.693155  566681 main.go:141] libmachine: (addons-535714)   <os>
	I1002 06:57:13.693163  566681 main.go:141] libmachine: (addons-535714)     <type>hvm</type>
	I1002 06:57:13.693172  566681 main.go:141] libmachine: (addons-535714)     <boot dev='cdrom'/>
	I1002 06:57:13.693186  566681 main.go:141] libmachine: (addons-535714)     <boot dev='hd'/>
	I1002 06:57:13.693192  566681 main.go:141] libmachine: (addons-535714)     <bootmenu enable='no'/>
	I1002 06:57:13.693197  566681 main.go:141] libmachine: (addons-535714)   </os>
	I1002 06:57:13.693202  566681 main.go:141] libmachine: (addons-535714)   <devices>
	I1002 06:57:13.693207  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='cdrom'>
	I1002 06:57:13.693215  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.693220  566681 main.go:141] libmachine: (addons-535714)       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.693225  566681 main.go:141] libmachine: (addons-535714)       <readonly/>
	I1002 06:57:13.693231  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693240  566681 main.go:141] libmachine: (addons-535714)     <disk type='file' device='disk'>
	I1002 06:57:13.693255  566681 main.go:141] libmachine: (addons-535714)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 06:57:13.693309  566681 main.go:141] libmachine: (addons-535714)       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.693334  566681 main.go:141] libmachine: (addons-535714)       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.693341  566681 main.go:141] libmachine: (addons-535714)     </disk>
	I1002 06:57:13.693357  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693371  566681 main.go:141] libmachine: (addons-535714)       <source network='mk-addons-535714'/>
	I1002 06:57:13.693378  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693391  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693399  566681 main.go:141] libmachine: (addons-535714)     <interface type='network'>
	I1002 06:57:13.693411  566681 main.go:141] libmachine: (addons-535714)       <source network='default'/>
	I1002 06:57:13.693416  566681 main.go:141] libmachine: (addons-535714)       <model type='virtio'/>
	I1002 06:57:13.693435  566681 main.go:141] libmachine: (addons-535714)     </interface>
	I1002 06:57:13.693445  566681 main.go:141] libmachine: (addons-535714)     <serial type='pty'>
	I1002 06:57:13.693480  566681 main.go:141] libmachine: (addons-535714)       <target port='0'/>
	I1002 06:57:13.693520  566681 main.go:141] libmachine: (addons-535714)     </serial>
	I1002 06:57:13.693540  566681 main.go:141] libmachine: (addons-535714)     <console type='pty'>
	I1002 06:57:13.693552  566681 main.go:141] libmachine: (addons-535714)       <target type='serial' port='0'/>
	I1002 06:57:13.693564  566681 main.go:141] libmachine: (addons-535714)     </console>
	I1002 06:57:13.693575  566681 main.go:141] libmachine: (addons-535714)     <rng model='virtio'>
	I1002 06:57:13.693588  566681 main.go:141] libmachine: (addons-535714)       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.693598  566681 main.go:141] libmachine: (addons-535714)     </rng>
	I1002 06:57:13.693609  566681 main.go:141] libmachine: (addons-535714)   </devices>
	I1002 06:57:13.693618  566681 main.go:141] libmachine: (addons-535714) </domain>
	I1002 06:57:13.693631  566681 main.go:141] libmachine: (addons-535714) 
	I1002 06:57:13.698471  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:ff:9b:2c in network default
	I1002 06:57:13.699181  566681 main.go:141] libmachine: (addons-535714) starting domain...
	I1002 06:57:13.699210  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:13.699219  566681 main.go:141] libmachine: (addons-535714) ensuring networks are active...
	I1002 06:57:13.699886  566681 main.go:141] libmachine: (addons-535714) Ensuring network default is active
	I1002 06:57:13.700240  566681 main.go:141] libmachine: (addons-535714) Ensuring network mk-addons-535714 is active
	I1002 06:57:13.700911  566681 main.go:141] libmachine: (addons-535714) getting domain XML...
	I1002 06:57:13.701998  566681 main.go:141] libmachine: (addons-535714) DBG | starting domain XML:
	I1002 06:57:13.702019  566681 main.go:141] libmachine: (addons-535714) DBG | <domain type='kvm'>
	I1002 06:57:13.702029  566681 main.go:141] libmachine: (addons-535714) DBG |   <name>addons-535714</name>
	I1002 06:57:13.702036  566681 main.go:141] libmachine: (addons-535714) DBG |   <uuid>26ed18e3-cae3-43e2-ba2a-85be4a0a7371</uuid>
	I1002 06:57:13.702049  566681 main.go:141] libmachine: (addons-535714) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 06:57:13.702060  566681 main.go:141] libmachine: (addons-535714) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 06:57:13.702069  566681 main.go:141] libmachine: (addons-535714) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 06:57:13.702075  566681 main.go:141] libmachine: (addons-535714) DBG |   <os>
	I1002 06:57:13.702085  566681 main.go:141] libmachine: (addons-535714) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 06:57:13.702093  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='cdrom'/>
	I1002 06:57:13.702101  566681 main.go:141] libmachine: (addons-535714) DBG |     <boot dev='hd'/>
	I1002 06:57:13.702116  566681 main.go:141] libmachine: (addons-535714) DBG |     <bootmenu enable='no'/>
	I1002 06:57:13.702127  566681 main.go:141] libmachine: (addons-535714) DBG |   </os>
	I1002 06:57:13.702134  566681 main.go:141] libmachine: (addons-535714) DBG |   <features>
	I1002 06:57:13.702180  566681 main.go:141] libmachine: (addons-535714) DBG |     <acpi/>
	I1002 06:57:13.702204  566681 main.go:141] libmachine: (addons-535714) DBG |     <apic/>
	I1002 06:57:13.702215  566681 main.go:141] libmachine: (addons-535714) DBG |     <pae/>
	I1002 06:57:13.702220  566681 main.go:141] libmachine: (addons-535714) DBG |   </features>
	I1002 06:57:13.702241  566681 main.go:141] libmachine: (addons-535714) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 06:57:13.702256  566681 main.go:141] libmachine: (addons-535714) DBG |   <clock offset='utc'/>
	I1002 06:57:13.702265  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 06:57:13.702283  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_reboot>restart</on_reboot>
	I1002 06:57:13.702295  566681 main.go:141] libmachine: (addons-535714) DBG |   <on_crash>destroy</on_crash>
	I1002 06:57:13.702305  566681 main.go:141] libmachine: (addons-535714) DBG |   <devices>
	I1002 06:57:13.702317  566681 main.go:141] libmachine: (addons-535714) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 06:57:13.702328  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='cdrom'>
	I1002 06:57:13.702340  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw'/>
	I1002 06:57:13.702352  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/boot2docker.iso'/>
	I1002 06:57:13.702364  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 06:57:13.702375  566681 main.go:141] libmachine: (addons-535714) DBG |       <readonly/>
	I1002 06:57:13.702387  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 06:57:13.702398  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702419  566681 main.go:141] libmachine: (addons-535714) DBG |     <disk type='file' device='disk'>
	I1002 06:57:13.702432  566681 main.go:141] libmachine: (addons-535714) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 06:57:13.702451  566681 main.go:141] libmachine: (addons-535714) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/addons-535714.rawdisk'/>
	I1002 06:57:13.702462  566681 main.go:141] libmachine: (addons-535714) DBG |       <target dev='hda' bus='virtio'/>
	I1002 06:57:13.702472  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 06:57:13.702482  566681 main.go:141] libmachine: (addons-535714) DBG |     </disk>
	I1002 06:57:13.702490  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 06:57:13.702503  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 06:57:13.702512  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702521  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 06:57:13.702535  566681 main.go:141] libmachine: (addons-535714) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 06:57:13.702589  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 06:57:13.702612  566681 main.go:141] libmachine: (addons-535714) DBG |     </controller>
	I1002 06:57:13.702624  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702630  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:00:74:bc'/>
	I1002 06:57:13.702639  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='mk-addons-535714'/>
	I1002 06:57:13.702646  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702658  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 06:57:13.702665  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702675  566681 main.go:141] libmachine: (addons-535714) DBG |     <interface type='network'>
	I1002 06:57:13.702687  566681 main.go:141] libmachine: (addons-535714) DBG |       <mac address='52:54:00:ff:9b:2c'/>
	I1002 06:57:13.702697  566681 main.go:141] libmachine: (addons-535714) DBG |       <source network='default'/>
	I1002 06:57:13.702707  566681 main.go:141] libmachine: (addons-535714) DBG |       <model type='virtio'/>
	I1002 06:57:13.702719  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 06:57:13.702730  566681 main.go:141] libmachine: (addons-535714) DBG |     </interface>
	I1002 06:57:13.702740  566681 main.go:141] libmachine: (addons-535714) DBG |     <serial type='pty'>
	I1002 06:57:13.702751  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='isa-serial' port='0'>
	I1002 06:57:13.702765  566681 main.go:141] libmachine: (addons-535714) DBG |         <model name='isa-serial'/>
	I1002 06:57:13.702775  566681 main.go:141] libmachine: (addons-535714) DBG |       </target>
	I1002 06:57:13.702784  566681 main.go:141] libmachine: (addons-535714) DBG |     </serial>
	I1002 06:57:13.702806  566681 main.go:141] libmachine: (addons-535714) DBG |     <console type='pty'>
	I1002 06:57:13.702820  566681 main.go:141] libmachine: (addons-535714) DBG |       <target type='serial' port='0'/>
	I1002 06:57:13.702827  566681 main.go:141] libmachine: (addons-535714) DBG |     </console>
	I1002 06:57:13.702839  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='mouse' bus='ps2'/>
	I1002 06:57:13.702850  566681 main.go:141] libmachine: (addons-535714) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 06:57:13.702861  566681 main.go:141] libmachine: (addons-535714) DBG |     <audio id='1' type='none'/>
	I1002 06:57:13.702881  566681 main.go:141] libmachine: (addons-535714) DBG |     <memballoon model='virtio'>
	I1002 06:57:13.702895  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 06:57:13.702901  566681 main.go:141] libmachine: (addons-535714) DBG |     </memballoon>
	I1002 06:57:13.702910  566681 main.go:141] libmachine: (addons-535714) DBG |     <rng model='virtio'>
	I1002 06:57:13.702918  566681 main.go:141] libmachine: (addons-535714) DBG |       <backend model='random'>/dev/random</backend>
	I1002 06:57:13.702929  566681 main.go:141] libmachine: (addons-535714) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 06:57:13.702944  566681 main.go:141] libmachine: (addons-535714) DBG |     </rng>
	I1002 06:57:13.702957  566681 main.go:141] libmachine: (addons-535714) DBG |   </devices>
	I1002 06:57:13.702972  566681 main.go:141] libmachine: (addons-535714) DBG | </domain>
	I1002 06:57:13.702987  566681 main.go:141] libmachine: (addons-535714) DBG | 
	I1002 06:57:14.963247  566681 main.go:141] libmachine: (addons-535714) waiting for domain to start...
	I1002 06:57:14.964664  566681 main.go:141] libmachine: (addons-535714) domain is now running
	I1002 06:57:14.964695  566681 main.go:141] libmachine: (addons-535714) waiting for IP...
	I1002 06:57:14.965420  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:14.966032  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:14.966060  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:14.966362  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:14.966431  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:14.966367  566709 retry.go:31] will retry after 210.201926ms: waiting for domain to come up
	I1002 06:57:15.178058  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.178797  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.178832  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.179051  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.179089  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.179030  566709 retry.go:31] will retry after 312.318729ms: waiting for domain to come up
	I1002 06:57:15.493036  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.493844  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.493865  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.494158  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.494260  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.494172  566709 retry.go:31] will retry after 379.144998ms: waiting for domain to come up
	I1002 06:57:15.874866  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:15.875597  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:15.875618  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:15.875940  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:15.875972  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:15.875891  566709 retry.go:31] will retry after 392.719807ms: waiting for domain to come up
	I1002 06:57:16.270678  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.271369  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.271417  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.271795  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.271822  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.271752  566709 retry.go:31] will retry after 502.852746ms: waiting for domain to come up
	I1002 06:57:16.776382  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:16.777033  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:16.777083  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:16.777418  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:16.777452  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:16.777390  566709 retry.go:31] will retry after 817.041708ms: waiting for domain to come up
	I1002 06:57:17.596403  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:17.597002  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:17.597037  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:17.597304  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:17.597337  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:17.597286  566709 retry.go:31] will retry after 1.129250566s: waiting for domain to come up
	I1002 06:57:18.728727  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:18.729410  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:18.729438  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:18.729739  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:18.729770  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:18.729716  566709 retry.go:31] will retry after 1.486801145s: waiting for domain to come up
	I1002 06:57:20.218801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:20.219514  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:20.219546  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:20.219811  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:20.219864  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:20.219802  566709 retry.go:31] will retry after 1.676409542s: waiting for domain to come up
	I1002 06:57:21.898812  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:21.899513  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:21.899536  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:21.899819  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:21.899877  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:21.899808  566709 retry.go:31] will retry after 1.43578276s: waiting for domain to come up
	I1002 06:57:23.337598  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:23.338214  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:23.338235  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:23.338569  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:23.338642  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:23.338553  566709 retry.go:31] will retry after 2.182622976s: waiting for domain to come up
	I1002 06:57:25.524305  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:25.524996  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:25.525030  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:25.525352  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:25.525383  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:25.525329  566709 retry.go:31] will retry after 2.567637867s: waiting for domain to come up
	I1002 06:57:28.094839  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:28.095351  566681 main.go:141] libmachine: (addons-535714) DBG | no network interface addresses found for domain addons-535714 (source=lease)
	I1002 06:57:28.095371  566681 main.go:141] libmachine: (addons-535714) DBG | trying to list again with source=arp
	I1002 06:57:28.095666  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find current IP address of domain addons-535714 in network mk-addons-535714 (interfaces detected: [])
	I1002 06:57:28.095696  566681 main.go:141] libmachine: (addons-535714) DBG | I1002 06:57:28.095635  566709 retry.go:31] will retry after 3.838879921s: waiting for domain to come up
	I1002 06:57:31.938799  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939560  566681 main.go:141] libmachine: (addons-535714) found domain IP: 192.168.39.164
	I1002 06:57:31.939593  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has current primary IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:31.939601  566681 main.go:141] libmachine: (addons-535714) reserving static IP address...
	I1002 06:57:31.940101  566681 main.go:141] libmachine: (addons-535714) DBG | unable to find host DHCP lease matching {name: "addons-535714", mac: "52:54:00:00:74:bc", ip: "192.168.39.164"} in network mk-addons-535714
	I1002 06:57:32.153010  566681 main.go:141] libmachine: (addons-535714) DBG | Getting to WaitForSSH function...
	I1002 06:57:32.153043  566681 main.go:141] libmachine: (addons-535714) reserved static IP address 192.168.39.164 for domain addons-535714
	I1002 06:57:32.153056  566681 main.go:141] libmachine: (addons-535714) waiting for SSH...
	I1002 06:57:32.156675  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157263  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.157288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.157522  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH client type: external
	I1002 06:57:32.157548  566681 main.go:141] libmachine: (addons-535714) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa (-rw-------)
	I1002 06:57:32.157582  566681 main.go:141] libmachine: (addons-535714) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 06:57:32.157609  566681 main.go:141] libmachine: (addons-535714) DBG | About to run SSH command:
	I1002 06:57:32.157620  566681 main.go:141] libmachine: (addons-535714) DBG | exit 0
	I1002 06:57:32.286418  566681 main.go:141] libmachine: (addons-535714) DBG | SSH cmd err, output: <nil>: 
	I1002 06:57:32.286733  566681 main.go:141] libmachine: (addons-535714) domain creation complete
	I1002 06:57:32.287044  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:32.287640  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288020  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:32.288207  566681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 06:57:32.288223  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:32.289782  566681 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 06:57:32.289795  566681 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 06:57:32.289800  566681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 06:57:32.289805  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.292433  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.292851  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.292897  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.293050  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.293317  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293481  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.293658  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.293813  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.294063  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.294076  566681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 06:57:32.392654  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.392681  566681 main.go:141] libmachine: Detecting the provisioner...
	I1002 06:57:32.392690  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.396029  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396454  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.396486  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.396681  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.396903  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397079  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.397260  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.397412  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.397680  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.397696  566681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 06:57:32.501992  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 06:57:32.502093  566681 main.go:141] libmachine: found compatible host: buildroot
	I1002 06:57:32.502117  566681 main.go:141] libmachine: Provisioning with buildroot...
	I1002 06:57:32.502131  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502439  566681 buildroot.go:166] provisioning hostname "addons-535714"
	I1002 06:57:32.502476  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.502701  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.506170  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506653  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.506716  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.506786  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.507040  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507252  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.507426  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.507729  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.507997  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.508013  566681 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535714 && echo "addons-535714" | sudo tee /etc/hostname
	I1002 06:57:32.632360  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535714
	
	I1002 06:57:32.632404  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.635804  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636293  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.636319  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.636574  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:32.636804  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.636969  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:32.637110  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:32.637297  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:32.637584  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:32.637613  566681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535714/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:57:32.752063  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:57:32.752119  566681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 06:57:32.752193  566681 buildroot.go:174] setting up certificates
	I1002 06:57:32.752210  566681 provision.go:84] configureAuth start
	I1002 06:57:32.752256  566681 main.go:141] libmachine: (addons-535714) Calling .GetMachineName
	I1002 06:57:32.752721  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:32.756026  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756514  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.756545  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.756704  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:32.759506  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.759945  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:32.759972  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:32.760113  566681 provision.go:143] copyHostCerts
	I1002 06:57:32.760210  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 06:57:32.760331  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 06:57:32.760392  566681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 06:57:32.760440  566681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.addons-535714 san=[127.0.0.1 192.168.39.164 addons-535714 localhost minikube]
	I1002 06:57:32.997259  566681 provision.go:177] copyRemoteCerts
	I1002 06:57:32.997339  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:57:32.997365  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.001746  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002246  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.002275  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.002606  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.002841  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.003067  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.003261  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.087811  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:57:33.120074  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:57:33.152344  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:57:33.183560  566681 provision.go:87] duration metric: took 431.305231ms to configureAuth
	I1002 06:57:33.183592  566681 buildroot.go:189] setting minikube options for container-runtime
	I1002 06:57:33.183785  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:33.183901  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.187438  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.187801  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.187825  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.188034  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.188285  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188508  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.188682  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.188927  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.189221  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.189246  566681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:57:33.455871  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:57:33.455896  566681 main.go:141] libmachine: Checking connection to Docker...
	I1002 06:57:33.455904  566681 main.go:141] libmachine: (addons-535714) Calling .GetURL
	I1002 06:57:33.457296  566681 main.go:141] libmachine: (addons-535714) DBG | using libvirt version 8000000
	I1002 06:57:33.460125  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460550  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.460582  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.460738  566681 main.go:141] libmachine: Docker is up and running!
	I1002 06:57:33.460770  566681 main.go:141] libmachine: Reticulating splines...
	I1002 06:57:33.460780  566681 client.go:171] duration metric: took 20.753318284s to LocalClient.Create
	I1002 06:57:33.460805  566681 start.go:167] duration metric: took 20.753406484s to libmachine.API.Create "addons-535714"
	I1002 06:57:33.460815  566681 start.go:293] postStartSetup for "addons-535714" (driver="kvm2")
	I1002 06:57:33.460824  566681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:57:33.460841  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.461104  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:57:33.461149  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.463666  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464001  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.464024  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.464278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.464486  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.464662  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.464805  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.547032  566681 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:57:33.552379  566681 info.go:137] Remote host: Buildroot 2025.02
	I1002 06:57:33.552408  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 06:57:33.552489  566681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 06:57:33.552524  566681 start.go:296] duration metric: took 91.702797ms for postStartSetup
	I1002 06:57:33.552573  566681 main.go:141] libmachine: (addons-535714) Calling .GetConfigRaw
	I1002 06:57:33.553229  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.556294  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556659  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.556691  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.556979  566681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/config.json ...
	I1002 06:57:33.557200  566681 start.go:128] duration metric: took 20.867433906s to createHost
	I1002 06:57:33.557235  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.559569  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.559976  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.560033  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.560209  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.560387  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560524  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.560647  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.560782  566681 main.go:141] libmachine: Using SSH client type: native
	I1002 06:57:33.561006  566681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I1002 06:57:33.561024  566681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 06:57:33.663941  566681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759388253.625480282
	
	I1002 06:57:33.663966  566681 fix.go:216] guest clock: 1759388253.625480282
	I1002 06:57:33.663974  566681 fix.go:229] Guest: 2025-10-02 06:57:33.625480282 +0000 UTC Remote: 2025-10-02 06:57:33.557215192 +0000 UTC m=+20.980868887 (delta=68.26509ms)
	I1002 06:57:33.664010  566681 fix.go:200] guest clock delta is within tolerance: 68.26509ms
	I1002 06:57:33.664022  566681 start.go:83] releasing machines lock for "addons-535714", held for 20.974372731s
	I1002 06:57:33.664050  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.664374  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:33.667827  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668310  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.668344  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.668518  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669079  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669275  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:33.669418  566681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:57:33.669466  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.669473  566681 ssh_runner.go:195] Run: cat /version.json
	I1002 06:57:33.669492  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:33.672964  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673168  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673457  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673495  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673642  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.673670  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:33.673878  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674001  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:33.674093  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674177  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:33.674268  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.674352  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:33.674502  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:33.752747  566681 ssh_runner.go:195] Run: systemctl --version
	I1002 06:57:33.777712  566681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:57:33.941402  566681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:57:33.949414  566681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:57:33.949490  566681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:57:33.971089  566681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 06:57:33.971121  566681 start.go:495] detecting cgroup driver to use...
	I1002 06:57:33.971215  566681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:57:33.990997  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:57:34.009642  566681 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:57:34.009719  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:57:34.028675  566681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:57:34.045011  566681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:57:34.191090  566681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:57:34.404836  566681 docker.go:234] disabling docker service ...
	I1002 06:57:34.404915  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:57:34.421846  566681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:57:34.437815  566681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:57:34.593256  566681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:57:34.739807  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:57:34.755656  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:57:34.780318  566681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:57:34.780381  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.794344  566681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:57:34.794437  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.807921  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.821174  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.834265  566681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:57:34.848039  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.861013  566681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:57:34.882928  566681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
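[Editor's note] The sed-based CRI-O reconfiguration logged above (pause image, cgroup manager, conmon cgroup) can be reproduced in isolation against a scratch copy of `02-crio.conf`. This is a minimal sketch with an invented sample config, not the VM's real file; it runs the same substitutions minikube issued, minus `sudo` (GNU sed assumed for `-i` and the one-line `a` command):

```shell
set -eu
tmp=$(mktemp -d)
# Hypothetical stand-in for /etc/crio/crio.conf.d/02-crio.conf
cat > "$tmp/02-crio.conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# Same edits, in the same order as the log: pause image, cgroup driver,
# drop any existing conmon_cgroup, then re-add it after cgroup_manager.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$tmp/02-crio.conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$tmp/02-crio.conf"
sed -i '/conmon_cgroup = .*/d' "$tmp/02-crio.conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$tmp/02-crio.conf"
grep -A1 'cgroup_manager' "$tmp/02-crio.conf"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
```

Deleting and re-appending `conmon_cgroup` (rather than substituting it) keeps the edit idempotent even when the key is missing from the original file.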
	I1002 06:57:34.895874  566681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:57:34.906834  566681 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 06:57:34.906902  566681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 06:57:34.930283  566681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:57:34.944196  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:35.086744  566681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 06:57:35.203118  566681 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:57:35.203247  566681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:57:35.208872  566681 start.go:563] Will wait 60s for crictl version
	I1002 06:57:35.208951  566681 ssh_runner.go:195] Run: which crictl
	I1002 06:57:35.213165  566681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 06:57:35.254690  566681 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 06:57:35.254809  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.285339  566681 ssh_runner.go:195] Run: crio --version
	I1002 06:57:35.318360  566681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 06:57:35.319680  566681 main.go:141] libmachine: (addons-535714) Calling .GetIP
	I1002 06:57:35.322840  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323187  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:35.323215  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:35.323541  566681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 06:57:35.328294  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
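[Editor's note] The `/etc/hosts` update above uses a filter-then-append pattern: strip any stale line for the name, append the current mapping, and replace the file wholesale so the entry never duplicates. A sketch against a throwaway hosts file (the real command targets `/etc/hosts` via `sudo cp`):

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name, then append the fresh one;
# writing to a temp file and moving it over keeps the update idempotent.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"
# prints 1 -- exactly one entry survives, however many times this runs
```

Note the `$'\t…'` pattern anchors on the tab separator, so unrelated lines that merely mention the hostname are left alone.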
	I1002 06:57:35.344278  566681 kubeadm.go:883] updating cluster {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:57:35.344381  566681 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:57:35.344426  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:35.382419  566681 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 06:57:35.382487  566681 ssh_runner.go:195] Run: which lz4
	I1002 06:57:35.386980  566681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 06:57:35.392427  566681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 06:57:35.392457  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 06:57:36.901929  566681 crio.go:462] duration metric: took 1.514994717s to copy over tarball
	I1002 06:57:36.902020  566681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 06:57:38.487982  566681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.585912508s)
	I1002 06:57:38.488018  566681 crio.go:469] duration metric: took 1.586055344s to extract the tarball
	I1002 06:57:38.488028  566681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 06:57:38.530041  566681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:57:38.574743  566681 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:57:38.574771  566681 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:57:38.574780  566681 kubeadm.go:934] updating node { 192.168.39.164 8443 v1.34.1 crio true true} ...
	I1002 06:57:38.574907  566681 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535714 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:57:38.574982  566681 ssh_runner.go:195] Run: crio config
	I1002 06:57:38.626077  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:38.626100  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:38.626114  566681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:57:38.626157  566681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535714 NodeName:addons-535714 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:57:38.626290  566681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-535714"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.164"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:57:38.626379  566681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:57:38.638875  566681 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:57:38.638942  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:57:38.650923  566681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 06:57:38.672765  566681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:57:38.695198  566681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 06:57:38.716738  566681 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I1002 06:57:38.721153  566681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:57:38.736469  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:38.882003  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:38.903662  566681 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714 for IP: 192.168.39.164
	I1002 06:57:38.903695  566681 certs.go:195] generating shared ca certs ...
	I1002 06:57:38.903722  566681 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.903919  566681 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 06:57:38.961629  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt ...
	I1002 06:57:38.961659  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt: {Name:mkce3dd067e2e7843e2a288d28dbaf57f057aeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961829  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key ...
	I1002 06:57:38.961841  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key: {Name:mka327360c05168b3164194068242bb15d511ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:38.961939  566681 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 06:57:39.050167  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt ...
	I1002 06:57:39.050199  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt: {Name:mkf18fa19ddf5ebcd4669a9a2e369e414c03725b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050375  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key ...
	I1002 06:57:39.050388  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key: {Name:mk774f61354e64c5344d2d0d059164fff9076c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.050460  566681 certs.go:257] generating profile certs ...
	I1002 06:57:39.050516  566681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key
	I1002 06:57:39.050537  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt with IP's: []
	I1002 06:57:39.147298  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt ...
	I1002 06:57:39.147330  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: {Name:mk17b498d515b2f43666faa03b17d7223c9a8157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147495  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key ...
	I1002 06:57:39.147505  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.key: {Name:mke1e8140b8916f87dd85d98abe8a51503f6e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.147578  566681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed
	I1002 06:57:39.147597  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.164]
	I1002 06:57:39.310236  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed ...
	I1002 06:57:39.310266  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed: {Name:mk247c08955d8ed7427926c7244db21ffe837768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310428  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed ...
	I1002 06:57:39.310441  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed: {Name:mkc3fa16c2fd82a07eac700fa655e28a42c60f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.310525  566681 certs.go:382] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt
	I1002 06:57:39.310624  566681 certs.go:386] copying /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key.f13304ed -> /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key
	I1002 06:57:39.310682  566681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key
	I1002 06:57:39.310701  566681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt with IP's: []
	I1002 06:57:39.497350  566681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt ...
	I1002 06:57:39.497386  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt: {Name:mk4f28529f4cee1ff8311028b7bb7fc35a77bba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497555  566681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key ...
	I1002 06:57:39.497569  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key: {Name:mkfac0b0a329edb8634114371202cb4ba011c129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:39.497750  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:57:39.497784  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:57:39.497808  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:57:39.497835  566681 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 06:57:39.498475  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:57:39.530649  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:57:39.561340  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:57:39.593844  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:57:39.629628  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:57:39.668367  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:57:39.699924  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:57:39.730177  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 06:57:39.761107  566681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:57:39.791592  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:57:39.813294  566681 ssh_runner.go:195] Run: openssl version
	I1002 06:57:39.820587  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:57:39.834664  566681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840283  566681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.840348  566681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:57:39.848412  566681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:57:39.863027  566681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:57:39.868269  566681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:57:39.868325  566681 kubeadm.go:400] StartCluster: {Name:addons-535714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-535714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:39.868408  566681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:57:39.868500  566681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:57:39.910571  566681 cri.go:89] found id: ""
	I1002 06:57:39.910645  566681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:57:39.923825  566681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:57:39.936522  566681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:57:39.949191  566681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:57:39.949214  566681 kubeadm.go:157] found existing configuration files:
	
	I1002 06:57:39.949292  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:57:39.961561  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:57:39.961637  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:57:39.974337  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:57:39.986029  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:57:39.986104  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:57:39.997992  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.008894  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:57:40.008966  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:57:40.021235  566681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:57:40.032694  566681 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:57:40.032754  566681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:57:40.045554  566681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 06:57:40.211362  566681 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:57:51.799597  566681 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:57:51.799689  566681 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:57:51.799798  566681 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:57:51.799950  566681 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:57:51.800082  566681 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:57:51.800206  566681 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:57:51.802349  566681 out.go:252]   - Generating certificates and keys ...
	I1002 06:57:51.802439  566681 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:57:51.802492  566681 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:57:51.802586  566681 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:57:51.802729  566681 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:57:51.802823  566681 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:57:51.802894  566681 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:57:51.802944  566681 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:57:51.803058  566681 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803125  566681 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:57:51.803276  566681 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-535714 localhost] and IPs [192.168.39.164 127.0.0.1 ::1]
	I1002 06:57:51.803350  566681 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:57:51.803420  566681 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:57:51.803491  566681 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:57:51.803557  566681 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:57:51.803634  566681 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:57:51.803717  566681 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:57:51.803807  566681 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:57:51.803899  566681 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:57:51.803950  566681 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:57:51.804029  566681 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:57:51.804088  566681 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:57:51.805702  566681 out.go:252]   - Booting up control plane ...
	I1002 06:57:51.805781  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:57:51.805846  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:57:51.805929  566681 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:57:51.806028  566681 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:57:51.806148  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:57:51.806260  566681 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:57:51.806361  566681 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:57:51.806420  566681 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:57:51.806575  566681 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:57:51.806669  566681 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:57:51.806717  566681 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.672587ms
	I1002 06:57:51.806806  566681 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:57:51.806892  566681 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.164:8443/livez
	I1002 06:57:51.806963  566681 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:57:51.807067  566681 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:57:51.807185  566681 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.362189492s
	I1002 06:57:51.807284  566681 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.802664802s
	I1002 06:57:51.807338  566681 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003805488s
	I1002 06:57:51.807453  566681 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:57:51.807587  566681 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:57:51.807642  566681 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:57:51.807816  566681 kubeadm.go:318] [mark-control-plane] Marking the node addons-535714 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:57:51.807890  566681 kubeadm.go:318] [bootstrap-token] Using token: 7tuk3k.1448ee54qv9op8vd
	I1002 06:57:51.810266  566681 out.go:252]   - Configuring RBAC rules ...
	I1002 06:57:51.810355  566681 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:57:51.810443  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:57:51.810582  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:57:51.810746  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:57:51.810922  566681 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:57:51.811039  566681 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:57:51.811131  566681 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:57:51.811203  566681 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:57:51.811259  566681 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:57:51.811271  566681 kubeadm.go:318] 
	I1002 06:57:51.811321  566681 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:57:51.811327  566681 kubeadm.go:318] 
	I1002 06:57:51.811408  566681 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:57:51.811416  566681 kubeadm.go:318] 
	I1002 06:57:51.811438  566681 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:57:51.811524  566681 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:57:51.811568  566681 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:57:51.811574  566681 kubeadm.go:318] 
	I1002 06:57:51.811638  566681 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:57:51.811650  566681 kubeadm.go:318] 
	I1002 06:57:51.811704  566681 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:57:51.811711  566681 kubeadm.go:318] 
	I1002 06:57:51.811751  566681 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:57:51.811811  566681 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:57:51.811912  566681 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:57:51.811926  566681 kubeadm.go:318] 
	I1002 06:57:51.812042  566681 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:57:51.812153  566681 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:57:51.812165  566681 kubeadm.go:318] 
	I1002 06:57:51.812280  566681 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812417  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 \
	I1002 06:57:51.812453  566681 kubeadm.go:318] 	--control-plane 
	I1002 06:57:51.812464  566681 kubeadm.go:318] 
	I1002 06:57:51.812595  566681 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:57:51.812615  566681 kubeadm.go:318] 
	I1002 06:57:51.812711  566681 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7tuk3k.1448ee54qv9op8vd \
	I1002 06:57:51.812863  566681 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:dba0bc6895d832f1cd30002c0cb93d3c189a3fde25ed4d6da128897e75a53f20 
	I1002 06:57:51.812931  566681 cni.go:84] Creating CNI manager for ""
	I1002 06:57:51.812944  566681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:51.815686  566681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 06:57:51.817060  566681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 06:57:51.834402  566681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 06:57:51.858951  566681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:57:51.859117  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:51.859124  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535714 minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-535714 minikube.k8s.io/primary=true
	I1002 06:57:51.921378  566681 ops.go:34] apiserver oom_adj: -16
	I1002 06:57:52.030323  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:52.531214  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.031113  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:53.531050  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.030867  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:54.531128  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.030521  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:55.530702  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.030762  566681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:57:56.196068  566681 kubeadm.go:1113] duration metric: took 4.337043927s to wait for elevateKubeSystemPrivileges
	I1002 06:57:56.196100  566681 kubeadm.go:402] duration metric: took 16.3277794s to StartCluster
	I1002 06:57:56.196121  566681 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.196294  566681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:56.196768  566681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:57:56.197012  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:57:56.197039  566681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:57:56.197157  566681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:57:56.197305  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.197326  566681 addons.go:69] Setting ingress=true in profile "addons-535714"
	I1002 06:57:56.197323  566681 addons.go:69] Setting default-storageclass=true in profile "addons-535714"
	I1002 06:57:56.197353  566681 addons.go:238] Setting addon ingress=true in "addons-535714"
	I1002 06:57:56.197360  566681 addons.go:69] Setting registry=true in profile "addons-535714"
	I1002 06:57:56.197367  566681 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535714"
	I1002 06:57:56.197376  566681 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535714"
	I1002 06:57:56.197382  566681 addons.go:69] Setting volumesnapshots=true in profile "addons-535714"
	I1002 06:57:56.197391  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535714"
	I1002 06:57:56.197393  566681 addons.go:69] Setting ingress-dns=true in profile "addons-535714"
	I1002 06:57:56.197397  566681 addons.go:238] Setting addon volumesnapshots=true in "addons-535714"
	I1002 06:57:56.197403  566681 addons.go:238] Setting addon ingress-dns=true in "addons-535714"
	I1002 06:57:56.197413  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197417  566681 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:57:56.197432  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197438  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197454  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197317  566681 addons.go:69] Setting gcp-auth=true in profile "addons-535714"
	I1002 06:57:56.197804  566681 addons.go:69] Setting metrics-server=true in profile "addons-535714"
	I1002 06:57:56.197813  566681 mustload.go:65] Loading cluster: addons-535714
	I1002 06:57:56.197822  566681 addons.go:238] Setting addon metrics-server=true in "addons-535714"
	I1002 06:57:56.197849  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.197953  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197985  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.197348  566681 addons.go:69] Setting cloud-spanner=true in profile "addons-535714"
	I1002 06:57:56.197995  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198002  566681 config.go:182] Loaded profile config "addons-535714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:57:56.198025  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198027  566681 addons.go:69] Setting inspektor-gadget=true in profile "addons-535714"
	I1002 06:57:56.198034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198040  566681 addons.go:238] Setting addon inspektor-gadget=true in "addons-535714"
	I1002 06:57:56.198051  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198062  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198075  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198080  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198105  566681 addons.go:69] Setting volcano=true in profile "addons-535714"
	I1002 06:57:56.198115  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198118  566681 addons.go:238] Setting addon volcano=true in "addons-535714"
	I1002 06:57:56.198121  566681 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198148  566681 addons.go:69] Setting registry-creds=true in profile "addons-535714"
	I1002 06:57:56.198149  566681 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-535714"
	I1002 06:57:56.198007  566681 addons.go:238] Setting addon cloud-spanner=true in "addons-535714"
	I1002 06:57:56.197369  566681 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-535714"
	I1002 06:57:56.198159  566681 addons.go:238] Setting addon registry-creds=true in "addons-535714"
	I1002 06:57:56.197383  566681 addons.go:238] Setting addon registry=true in "addons-535714"
	I1002 06:57:56.198168  566681 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-535714"
	I1002 06:57:56.197305  566681 addons.go:69] Setting yakd=true in profile "addons-535714"
	I1002 06:57:56.198174  566681 addons.go:69] Setting storage-provisioner=true in profile "addons-535714"
	I1002 06:57:56.198182  566681 addons.go:238] Setting addon yakd=true in "addons-535714"
	I1002 06:57:56.198188  566681 addons.go:238] Setting addon storage-provisioner=true in "addons-535714"
	I1002 06:57:56.197356  566681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535714"
	I1002 06:57:56.197990  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198337  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198362  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198371  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198392  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198402  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198453  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198563  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198685  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198716  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198796  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198823  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.198872  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.198882  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.198903  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.199225  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199278  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199496  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199602  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.199605  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.199635  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.200717  566681 out.go:179] * Verifying Kubernetes components...
	I1002 06:57:56.203661  566681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:57:56.205590  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205627  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.205734  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.205767  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.207434  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.207479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.210405  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.210443  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.213438  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.213479  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.214017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.214056  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.232071  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I1002 06:57:56.233110  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.234209  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.234234  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.234937  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.236013  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.236165  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.237450  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I1002 06:57:56.239323  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I1002 06:57:56.239414  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I1002 06:57:56.240034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240196  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.240748  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.240776  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.240868  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I1002 06:57:56.240881  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.241379  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.241396  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.241535  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.242519  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.242540  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.242696  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.242735  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.242850  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.243325  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I1002 06:57:56.243893  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.243945  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.244617  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.244654  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.245057  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.245890  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.245907  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.246010  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 06:57:56.246033  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1002 06:57:56.246568  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.247024  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.247099  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.247133  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.247421  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1002 06:57:56.247710  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.247729  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.248188  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.248445  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.249846  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.250467  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.251029  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251054  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.251579  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.251601  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.252078  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.252654  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.252734  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.255593  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.255986  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.256022  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.257178  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.257900  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.257951  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.258275  566681 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-535714"
	I1002 06:57:56.259770  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.259874  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.260317  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.260360  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.260738  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.260770  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.261307  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.261989  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.262034  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.263359  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I1002 06:57:56.263562  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I1002 06:57:56.264010  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.264539  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.264559  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.265015  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.265220  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.268199  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38901
	I1002 06:57:56.268835  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.269385  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.269407  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.269800  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.272103  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.272173  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.272820  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.274630  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1002 06:57:56.275810  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I1002 06:57:56.275999  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1002 06:57:56.276099  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I1002 06:57:56.276317  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I1002 06:57:56.276957  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277804  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.277826  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.277935  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.277992  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.279294  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.279318  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.279418  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279522  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1002 06:57:56.279526  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.279724  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.280424  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280801  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.280956  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:57:56.280961  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281067  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281080  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281248  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.281259  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281396  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.280977  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.281804  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.281870  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.282274  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282869  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.282901  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.282927  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.282975  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.283442  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:57:56.284009  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.284202  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.284751  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:56.284768  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:57:56.284787  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.284857  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.284890  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.285017  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.285054  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.288207  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.289274  566681 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:57:56.289290  566681 addons.go:238] Setting addon default-storageclass=true in "addons-535714"
	I1002 06:57:56.289364  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:57:56.289753  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.289797  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290034  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290042  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1002 06:57:56.290151  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.290556  566681 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:56.290578  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.290579  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:57:56.290609  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.290771  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.290990  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.291089  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1002 06:57:56.291362  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291376  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.291505  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.291516  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.292055  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.293244  566681 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:57:56.294939  566681 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:56.294996  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:57:56.295277  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.296317  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.296363  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.296433  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I1002 06:57:56.297190  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.297368  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.300772  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.300866  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.300946  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.300966  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.300983  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301003  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.301026  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.301076  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I1002 06:57:56.301165  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.301203  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.301228  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I1002 06:57:56.301400  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.301411  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.301454  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.301467  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.303443  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.303720  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.303466  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.303491  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.303762  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.303770  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:57:56.303776  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:57:56.303526  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.303632  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.304435  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.304932  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.305291  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.305345  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.305464  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:57:56.305492  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1002 06:57:56.305495  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:57:56.305508  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:57:56.305577  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.305592  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	W1002 06:57:56.305630  566681 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:57:56.306621  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307189  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.307311  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.307383  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.307409  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.307505  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.307540  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.307955  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.307981  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.308071  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.308163  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.308587  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.309033  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309057  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309132  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309293  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.309302  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.309314  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.309372  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.309533  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309698  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.309703  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.309839  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310208  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.310523  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.311044  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.311749  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.313557  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.316426  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I1002 06:57:56.319293  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1002 06:57:56.319454  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.319564  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I1002 06:57:56.319675  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I1002 06:57:56.319683  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.319813  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.320386  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.320405  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.320695  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.320492  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.321204  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.321258  566681 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:57:56.321684  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.321443  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I1002 06:57:56.321593  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.321816  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.322144  566681 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:57:56.322156  566681 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:57:56.323037  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.323050  566681 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:56.323066  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:57:56.323087  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.323146  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.323323  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.323337  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.324564  566681 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:56.324583  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:57:56.324603  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.324892  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325026  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.325041  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.325304  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I1002 06:57:56.325602  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.325730  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.325892  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326132  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.326261  566681 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:57:56.327284  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.327472  566681 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:57:56.327597  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:57:56.327623  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.328569  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.328642  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.328661  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329119  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.329383  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.329634  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.329665  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.329932  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.330003  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.331010  566681 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:57:56.331650  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332245  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:57:56.332277  566681 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:57:56.332261  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.332297  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.332372  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.333369  566681 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:57:56.333621  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:57:56.333646  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.333810  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:57:56.334276  566681 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:57:56.334843  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.335194  566681 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:56.335210  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:57:56.335228  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.335446  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.335655  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I1002 06:57:56.335851  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.336132  566681 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:57:56.336170  566681 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:57:56.336280  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.336440  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 06:57:56.336618  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.337098  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.338250  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.338315  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.338584  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.338676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.338709  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.338721  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.339313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.339382  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.339452  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.339507  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.340336  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.340677  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.340657  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341043  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.341288  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.341796  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.341865  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342040  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.342263  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342431  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.342440  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342454  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342502  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342595  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.342614  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.342621  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.342695  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.342072  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343379  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.343750  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.343817  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.343832  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344313  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.344562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.344702  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.344753  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.344946  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.345322  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.345404  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345404  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.345548  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.345606  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.345806  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.346007  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.346320  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.346590  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.346862  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1002 06:57:56.347602  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.347914  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.348757  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.348800  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.349261  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.349633  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.349706  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.350337  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:57:56.351587  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:57:56.351643  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.351655  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:57:56.352903  566681 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:57:56.352987  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:57:56.353046  566681 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:57:56.353092  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.352987  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:57:56.353974  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1002 06:57:56.354300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I1002 06:57:56.354530  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:57:56.354545  566681 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:57:56.354562  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.354607  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355031  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.355314  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355362  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355747  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.355869  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:57:56.355907  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.355921  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.355982  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.356446  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.356686  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.358485  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:57:56.359466  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.359801  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360238  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360272  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.360643  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360654  566681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:56.360667  566681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:57:56.360676  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.360684  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.360847  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.360902  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.360949  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.361063  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361261  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.361264  566681 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:57:56.361278  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.361264  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:57:56.361448  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.361531  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.361713  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.362047  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.363668  566681 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:57:56.363670  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:57:56.364768  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365172  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:56.365189  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:57:56.365208  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.365463  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.365492  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.365867  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.366200  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.366332  566681 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:57:56.366394  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.366567  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.367647  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:57:56.367669  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:57:56.367689  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.369424  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370073  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.370181  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.370353  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.370354  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1002 06:57:56.370539  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.370710  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.370855  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.371120  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:57:56.371862  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:57:56.371993  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:57:56.372440  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:57:56.372590  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.372646  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:57:56.373687  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.373711  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.373884  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.374060  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.374270  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.374438  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:57:56.374887  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:57:56.376513  566681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:57:56.377878  566681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:56.377895  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:57:56.377926  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:57:56.381301  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.381862  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:57:56.381898  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:57:56.382058  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:57:56.382245  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:57:56.382379  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:57:56.382525  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	W1002 06:57:56.611250  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611293  566681 retry.go:31] will retry after 268.923212ms: ssh: handshake failed: read tcp 192.168.39.1:41640->192.168.39.164:22: read: connection reset by peer
	W1002 06:57:56.611372  566681 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:56.611378  566681 retry.go:31] will retry after 284.79555ms: ssh: handshake failed: read tcp 192.168.39.1:41654->192.168.39.164:22: read: connection reset by peer
	I1002 06:57:57.238066  566681 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:57:57.238093  566681 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:57:57.274258  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:57:57.291447  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:57:57.296644  566681 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:57.296665  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:57:57.317724  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:57:57.326760  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:57:57.326790  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:57:57.344388  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:57:57.359635  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:57:57.359666  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:57:57.391219  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:57:57.397913  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:57:57.466213  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:57:57.539770  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:57:57.539800  566681 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:57:57.565073  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:57:57.565109  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:57:57.626622  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.42956155s)
	I1002 06:57:57.626664  566681 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.422968545s)
	I1002 06:57:57.626751  566681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:57:57.626829  566681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:57:57.788309  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:57:57.788340  566681 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:57:57.863163  566681 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:57.863190  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:57:57.896903  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:57:57.896955  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:57:57.923302  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:57:58.011690  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:57:58.012981  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:57:58.110306  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:57:58.110346  566681 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:57:58.142428  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:57:58.142456  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:57:58.216082  566681 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:57:58.216112  566681 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:57:58.218768  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:57:58.222643  566681 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.222669  566681 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:57:58.429860  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:57:58.429897  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:57:58.485954  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:57:58.485995  566681 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:57:58.501916  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:57:58.501955  566681 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:57:58.521314  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:57:58.818318  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:57:58.818357  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:57:58.833980  566681 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:58.834010  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:57:58.873392  566681 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:58.873431  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:57:59.176797  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:57:59.186761  566681 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:57:59.186798  566681 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:57:59.305759  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:57:59.719259  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:57:59.719285  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:58:00.188246  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:58:00.188281  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:58:00.481133  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.20682266s)
	I1002 06:58:00.481238  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.481605  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.481667  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.481693  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.481705  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:00.481717  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:00.482053  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:00.482070  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:00.482081  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:00.644178  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:58:00.644209  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:58:01.086809  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:58:01.086834  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:58:01.452986  566681 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:01.453026  566681 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:58:02.150700  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:58:02.601667  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.310178549s)
	I1002 06:58:02.601725  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.28395893s)
	I1002 06:58:02.601734  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601747  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601765  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601795  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601869  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.25743101s)
	I1002 06:58:02.601905  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601924  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.601917  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.210665802s)
	I1002 06:58:02.601951  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.601961  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602046  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602055  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602062  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602178  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602365  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602381  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602379  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602385  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602399  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602401  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602410  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602351  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602416  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602424  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602390  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602460  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602330  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602552  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602560  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:02.602566  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:02.602767  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602847  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.602996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.603001  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603018  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:02.602869  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:02.602869  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:02.603276  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:03.763895  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:58:03.763944  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:03.767733  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768302  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:03.768333  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:03.768654  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:03.768868  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:03.769064  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:03.769213  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:04.277228  566681 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:58:04.505226  566681 addons.go:238] Setting addon gcp-auth=true in "addons-535714"
	I1002 06:58:04.505305  566681 host.go:66] Checking if "addons-535714" exists ...
	I1002 06:58:04.505781  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.505848  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.521300  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I1002 06:58:04.521841  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.522464  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.522494  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.522889  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.523576  566681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 06:58:04.523636  566681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 06:58:04.537716  566681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1002 06:58:04.538258  566681 main.go:141] libmachine: () Calling .GetVersion
	I1002 06:58:04.538728  566681 main.go:141] libmachine: Using API Version  1
	I1002 06:58:04.538756  566681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 06:58:04.539153  566681 main.go:141] libmachine: () Calling .GetMachineName
	I1002 06:58:04.539385  566681 main.go:141] libmachine: (addons-535714) Calling .GetState
	I1002 06:58:04.541614  566681 main.go:141] libmachine: (addons-535714) Calling .DriverName
	I1002 06:58:04.541849  566681 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:58:04.541880  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHHostname
	I1002 06:58:04.545872  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546401  566681 main.go:141] libmachine: (addons-535714) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:74:bc", ip: ""} in network mk-addons-535714: {Iface:virbr1 ExpiryTime:2025-10-02 07:57:29 +0000 UTC Type:0 Mac:52:54:00:00:74:bc Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:addons-535714 Clientid:01:52:54:00:00:74:bc}
	I1002 06:58:04.546429  566681 main.go:141] libmachine: (addons-535714) DBG | domain addons-535714 has defined IP address 192.168.39.164 and MAC address 52:54:00:00:74:bc in network mk-addons-535714
	I1002 06:58:04.546708  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHPort
	I1002 06:58:04.546895  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHKeyPath
	I1002 06:58:04.547027  566681 main.go:141] libmachine: (addons-535714) Calling .GetSSHUsername
	I1002 06:58:04.547194  566681 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/addons-535714/id_rsa Username:docker}
	I1002 06:58:05.770941  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.372950609s)
	I1002 06:58:05.771023  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771039  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771065  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.304816797s)
	I1002 06:58:05.771113  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771131  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771178  566681 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.1443973s)
	I1002 06:58:05.771222  566681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.144363906s)
	I1002 06:58:05.771258  566681 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 06:58:05.771308  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.847977896s)
	W1002 06:58:05.771333  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771355  566681 retry.go:31] will retry after 297.892327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:05.771456  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758443398s)
	I1002 06:58:05.771481  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771490  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771540  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.759815099s)
	I1002 06:58:05.771573  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771575  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.552784974s)
	I1002 06:58:05.771584  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771595  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771611  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771719  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.250362363s)
	I1002 06:58:05.771747  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.771759  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.771942  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.771963  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772013  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772022  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772032  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772030  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772040  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772044  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772052  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772059  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772194  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772224  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772230  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772248  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772255  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772485  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772523  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772532  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772541  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772549  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772589  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772628  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.772636  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772645  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.772653  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772709  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772796  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773193  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.773210  566681 addons.go:479] Verifying addon registry=true in "addons-535714"
	I1002 06:58:05.773744  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.773810  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.773834  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774038  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.774118  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774129  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.772818  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.772841  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.774925  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.774937  566681 addons.go:479] Verifying addon ingress=true in "addons-535714"
	I1002 06:58:05.772862  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775004  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775017  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.775024  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.772880  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775347  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.775380  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.775386  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.775394  566681 addons.go:479] Verifying addon metrics-server=true in "addons-535714"
	I1002 06:58:05.776348  566681 node_ready.go:35] waiting up to 6m0s for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.776980  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.776996  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:05.776998  566681 out.go:179] * Verifying registry addon...
	I1002 06:58:05.779968  566681 out.go:179] * Verifying ingress addon...
	I1002 06:58:05.780767  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:58:05.782010  566681 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:58:05.829095  566681 node_ready.go:49] node "addons-535714" is "Ready"
	I1002 06:58:05.829146  566681 node_ready.go:38] duration metric: took 52.75602ms for node "addons-535714" to be "Ready" ...
	I1002 06:58:05.829168  566681 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:58:05.829233  566681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:58:05.834443  566681 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:58:05.834466  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:05.835080  566681 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:58:05.835100  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:05.875341  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.875368  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.875751  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.875763  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.875778  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:58:05.875878  566681 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1002 06:58:05.909868  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:05.909898  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:05.910207  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:05.910270  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:05.910287  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.069811  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:06.216033  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.039174172s)
	W1002 06:58:06.216104  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216108  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.910297192s)
	I1002 06:58:06.216150  566681 retry.go:31] will retry after 161.340324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:58:06.216192  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216210  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216504  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216542  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216549  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.216557  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:06.216563  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:06.216800  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:06.216843  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:06.216850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:06.218514  566681 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535714 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:58:06.294875  566681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535714" context rescaled to 1 replicas
	I1002 06:58:06.324438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:06.327459  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.377937  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:58:06.794270  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:06.798170  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.296006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.297921  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.825812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:07.825866  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:07.904551  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.753782282s)
	I1002 06:58:07.904616  566681 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362740219s)
	I1002 06:58:07.904661  566681 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.075410022s)
	I1002 06:58:07.904685  566681 api_server.go:72] duration metric: took 11.707614799s to wait for apiserver process to appear ...
	I1002 06:58:07.904692  566681 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:58:07.904618  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.904714  566681 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I1002 06:58:07.904746  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905650  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.905668  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.905673  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:58:07.905682  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:07.905697  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:07.905988  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:07.906010  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:07.906023  566681 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-535714"
	I1002 06:58:07.917720  566681 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:58:07.917721  566681 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:58:07.919394  566681 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:58:07.920319  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:58:07.920611  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:58:07.920631  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:58:07.923712  566681 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I1002 06:58:07.935689  566681 api_server.go:141] control plane version: v1.34.1
	I1002 06:58:07.935726  566681 api_server.go:131] duration metric: took 31.026039ms to wait for apiserver health ...
	I1002 06:58:07.935739  566681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:58:07.938642  566681 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:58:07.938662  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:07.962863  566681 system_pods.go:59] 20 kube-system pods found
	I1002 06:58:07.962924  566681 system_pods.go:61] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:07.962934  566681 system_pods.go:61] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962944  566681 system_pods.go:61] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:07.962951  566681 system_pods.go:61] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:07.962955  566681 system_pods.go:61] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:07.962959  566681 system_pods.go:61] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending
	I1002 06:58:07.962962  566681 system_pods.go:61] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:07.962965  566681 system_pods.go:61] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:07.962968  566681 system_pods.go:61] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:07.962973  566681 system_pods.go:61] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:07.962979  566681 system_pods.go:61] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:07.962983  566681 system_pods.go:61] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:07.962988  566681 system_pods.go:61] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:07.962994  566681 system_pods.go:61] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:07.962999  566681 system_pods.go:61] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:07.963005  566681 system_pods.go:61] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:07.963017  566681 system_pods.go:61] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:07.963022  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963031  566681 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:07.963036  566681 system_pods.go:61] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:07.963048  566681 system_pods.go:74] duration metric: took 27.298515ms to wait for pod list to return data ...
	I1002 06:58:07.963061  566681 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:58:07.979696  566681 default_sa.go:45] found service account: "default"
	I1002 06:58:07.979723  566681 default_sa.go:55] duration metric: took 16.655591ms for default service account to be created ...
	I1002 06:58:07.979733  566681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:58:08.050371  566681 system_pods.go:86] 20 kube-system pods found
	I1002 06:58:08.050407  566681 system_pods.go:89] "amd-gpu-device-plugin-f7qcs" [789f2b98-37d8-40b1-9d96-0943237a099a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 06:58:08.050415  566681 system_pods.go:89] "coredns-66bc5c9577-6v7pj" [edf53945-e6e1-4a19-a443-bfb4d2ea2097] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050424  566681 system_pods.go:89] "coredns-66bc5c9577-w7hjm" [df6c56bd-f409-4243-8017-c7b13bcd2610] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:58:08.050430  566681 system_pods.go:89] "csi-hostpath-attacher-0" [27de7994-2f0d-4f74-a4f7-7e22d4971553] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:58:08.050438  566681 system_pods.go:89] "csi-hostpath-resizer-0" [1a933762-fa4f-4072-8b4b-d8b6c46d4f7e] Pending
	I1002 06:58:08.050443  566681 system_pods.go:89] "csi-hostpathplugin-8sjk8" [914e6ab5-a344-4664-a33a-b4909c1b7903] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:58:08.050449  566681 system_pods.go:89] "etcd-addons-535714" [b6c13570-2725-441a-bb01-88f51897ae55] Running
	I1002 06:58:08.050456  566681 system_pods.go:89] "kube-apiserver-addons-535714" [5bc781de-e350-46bb-8c3e-c1d575ba58d8] Running
	I1002 06:58:08.050463  566681 system_pods.go:89] "kube-controller-manager-addons-535714" [6e426a3d-8271-4e51-9e94-b2098f6e9fae] Running
	I1002 06:58:08.050472  566681 system_pods.go:89] "kube-ingress-dns-minikube" [0db8a359-0034-4d93-9741-a13248109f50] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:58:08.050477  566681 system_pods.go:89] "kube-proxy-z495t" [ff433508-be20-4930-a1bf-51f227b0c22a] Running
	I1002 06:58:08.050485  566681 system_pods.go:89] "kube-scheduler-addons-535714" [2d4d100d-c66b-4279-aad5-32c2ec80b7c2] Running
	I1002 06:58:08.050493  566681 system_pods.go:89] "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:58:08.050504  566681 system_pods.go:89] "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:58:08.050512  566681 system_pods.go:89] "registry-66898fdd98-rc8tq" [664b0bff-06c4-43b6-8e54-2664c0dcad56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:58:08.050523  566681 system_pods.go:89] "registry-creds-764b6fb674-ck8xq" [fbbe80b8-209e-480d-b2e3-98a5d6c54c27] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:58:08.050528  566681 system_pods.go:89] "registry-proxy-d9npj" [542f8fb1-6b0c-47b2-89ff-4dc935710130] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:58:08.050537  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g4hd4" [f552d1e8-79a8-4bf6-be47-26aa19781b53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050542  566681 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knwl8" [bcee0c5b-2829-4ba3-82ad-31430c403352] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:58:08.050551  566681 system_pods.go:89] "storage-provisioner" [e38a8c17-a75a-460e-bf52-2fc7f98d9595] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:58:08.050567  566681 system_pods.go:126] duration metric: took 70.827007ms to wait for k8s-apps to be running ...
	I1002 06:58:08.050583  566681 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:58:08.050638  566681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:58:08.169874  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:58:08.169907  566681 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:58:08.289577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.292025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.296361  566681 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.296391  566681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:58:08.432642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:08.459596  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:58:08.795545  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:08.796983  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:08.947651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.295174  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.296291  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.426575  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:09.794891  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:09.794937  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:09.929559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.288382  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.293181  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.428326  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.511821  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.441960114s)
	W1002 06:58:10.511871  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511903  566681 retry.go:31] will retry after 394.105371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:10.511999  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.133998235s)
	I1002 06:58:10.512065  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512084  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512009  566681 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.461351775s)
	I1002 06:58:10.512151  566681 system_svc.go:56] duration metric: took 2.461548607s WaitForService to wait for kubelet
	I1002 06:58:10.512170  566681 kubeadm.go:586] duration metric: took 14.315097833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:58:10.512195  566681 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:58:10.512421  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512436  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.512445  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.512451  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.512808  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.512831  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.525421  566681 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 06:58:10.525467  566681 node_conditions.go:123] node cpu capacity is 2
	I1002 06:58:10.525483  566681 node_conditions.go:105] duration metric: took 13.282233ms to run NodePressure ...
	I1002 06:58:10.525500  566681 start.go:241] waiting for startup goroutines ...
	I1002 06:58:10.876948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:10.878962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:10.907099  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:10.933831  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.474178987s)
	I1002 06:58:10.933902  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.933917  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934327  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934351  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.934363  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:58:10.934372  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:58:10.934718  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:58:10.934741  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:58:10.936073  566681 addons.go:479] Verifying addon gcp-auth=true in "addons-535714"
	I1002 06:58:10.939294  566681 out.go:179] * Verifying gcp-auth addon...
	I1002 06:58:10.941498  566681 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:58:10.967193  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:10.967643  566681 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:58:10.967661  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.291995  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.292859  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.426822  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.449596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:11.787220  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:11.790007  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:11.927177  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:11.946352  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.291330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.291893  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.412988  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.505843996s)
	W1002 06:58:12.413060  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.413088  566681 retry.go:31] will retry after 830.72209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:12.425033  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.449434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:12.790923  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:12.792837  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:12.929132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:12.949344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.244514  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:13.289311  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.291334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.429008  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.453075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:13.786448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:13.787372  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:13.926128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:13.944808  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.290787  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.291973  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.426597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.446124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:14.495404  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.250841467s)
	W1002 06:58:14.495476  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.495515  566681 retry.go:31] will retry after 993.52867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:14.787133  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:14.787363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:14.925480  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:14.947120  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.288745  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.290247  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.426491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.446707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:15.489998  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:15.790203  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:15.790718  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:15.926338  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:15.947762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.288050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.294216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.426315  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.448623  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:16.749674  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.259622296s)
	W1002 06:58:16.749739  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.749766  566681 retry.go:31] will retry after 685.893269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:16.784937  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:16.789418  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:16.924303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:16.945254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.286582  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.289258  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.429493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.436551  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:17.446130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:17.789304  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:17.789354  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:17.927192  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:17.947272  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.287684  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.287964  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.425334  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.446542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:18.793984  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357370737s)
	W1002 06:58:18.794035  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.794058  566681 retry.go:31] will retry after 1.769505645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:18.818834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:18.819319  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:18.926250  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:18.946166  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.286120  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.287299  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.427368  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.446296  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:19.788860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:19.790575  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:19.926266  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:19.946838  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.285631  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.286287  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.426458  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.448700  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:20.563743  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:20.784983  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:20.792452  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:20.928439  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:20.946213  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.354534  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.355101  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.424438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.447780  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.787792  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:21.788239  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:21.926313  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:21.946909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:21.986148  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.422343909s)
	W1002 06:58:21.986215  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:21.986241  566681 retry.go:31] will retry after 1.591159568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:22.479105  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.490010  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.490062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:22.490154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.785438  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:22.785505  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:22.924097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:22.945260  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.287691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.288324  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.424675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.444770  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:23.578011  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:23.942123  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:23.948294  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:23.948453  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:23.950791  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.287641  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.287755  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.427062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.445753  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:24.646106  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.068053257s)
	W1002 06:58:24.646165  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.646192  566681 retry.go:31] will retry after 2.605552754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:24.785021  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:24.786706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:24.924880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:24.945307  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.293097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.295253  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.426401  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.448785  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:25.786965  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:25.789832  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:25.926383  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:25.947419  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.286815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.424942  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.444763  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:26.788540  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:26.788706  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:26.924809  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:26.945896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.252378  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:27.285347  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.286330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.426765  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.444675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:27.783930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:27.785939  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:27.925152  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:27.946794  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:27.992201  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:27.992240  566681 retry.go:31] will retry after 8.383284602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:28.292474  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.293236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.427577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.449878  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:28.785825  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:28.786277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:28.930557  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:28.944934  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.288741  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.289425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.425596  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.448825  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:29.791293  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:29.791772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:29.925493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:29.947040  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.289093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.289274  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.429043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.445086  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:30.787343  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:30.788106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:30.925916  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:30.945578  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.287772  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.288130  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.424173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.444911  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:31.839251  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:31.839613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:31.924537  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:31.945244  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.285593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.287197  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.428173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.445646  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:32.790722  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:32.792545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:32.924044  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:32.948465  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.287477  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.287815  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.426173  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.445002  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:33.789091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:33.789248  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:33.926672  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:33.945340  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.287879  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.291550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.424476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.446160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:34.790769  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:34.793072  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:34.924896  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:34.945667  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.523723  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.524500  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:35.524737  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.525162  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.790230  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:35.791831  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:35.924241  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:35.944951  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.289627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.289977  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.375684  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:36.425592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.451074  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:36.785903  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:36.787679  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:36.925288  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:36.947999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.311635  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.311959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.426029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.446091  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:37.636801  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261070571s)
	W1002 06:58:37.636852  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.636877  566681 retry.go:31] will retry after 12.088306464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:37.784365  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:37.786077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:37.924729  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:37.947075  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.287422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.288052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.424776  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.446043  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:38.787364  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:38.788336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:38.929977  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:38.952669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.285777  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.286130  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.425664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.445359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:39.791043  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:39.792332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:39.927261  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:39.949133  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.297847  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.298155  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.508411  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:40.508530  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.790869  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:40.791640  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:40.926541  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:40.946409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.284335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.288282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.425342  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.445476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:41.786456  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:41.787369  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:41.925788  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:41.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.285122  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.289954  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.427812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.448669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:42.789086  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:42.793784  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:42.981476  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:42.983793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.287301  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.287653  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.425089  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.446115  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:43.788762  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:43.788804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:43.925841  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:43.946154  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.291446  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.291561  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.424642  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.445497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:44.784807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:44.785666  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:44.924223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:44.945793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.287330  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.288804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.425720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.445387  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:45.784761  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:45.787219  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:45.925198  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:45.945101  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.287324  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.287453  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.425817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:46.785000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:46.786016  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:46.924786  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:46.944720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.284615  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.286350  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.424772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.444696  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:47.784801  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:47.786247  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:47.924675  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:47.945863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.285254  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.286071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.424850  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.444546  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:48.784736  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:48.787062  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:48.924609  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:48.945428  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.285611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.286827  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.424821  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.444716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:49.726164  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:58:49.787775  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:49.787812  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:49.924332  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:49.945915  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:50.285693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.287323  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.425093  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.445046  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:58:50.457717  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.457755  566681 retry.go:31] will retry after 14.401076568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:58:50.785374  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:50.786592  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:50.924494  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:50.946113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.285309  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.424519  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.446358  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:51.785764  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:51.787620  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:51.924671  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:51.945518  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.284608  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.286328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.426252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.444955  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:52.785415  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:52.786501  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:52.924360  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:52.945603  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.286059  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.286081  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.426061  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.445434  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:53.784563  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:53.787018  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:53.926712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:53.945516  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.285670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.286270  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.425263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.445015  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:54.783971  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:54.785518  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:54.924652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:54.944701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.284095  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.285982  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.425045  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.445159  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:55.784789  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:55.785811  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:55.925024  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:55.945670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.284935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.286230  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.424865  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.444979  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:56.784010  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:56.785095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:56.925082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:56.945267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.285037  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.290841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.423992  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.444492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:57.785708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:57.786647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:57.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:57.944543  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.284397  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.286589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.424263  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.446278  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:58.784592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:58.786223  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:58.925275  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:58.945639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.284167  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.286213  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.424554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.446331  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:58:59.786351  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:58:59.786532  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:58:59.924799  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:58:59.944552  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.284593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.286147  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.427708  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.446640  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:00.783993  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:00.786195  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:00.925109  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:00.945645  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.284268  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.286567  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.425880  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.444926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:01.784751  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:01.786669  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:01.924082  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:01.945409  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.285484  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.287955  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.424588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.445328  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:02.785933  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:02.786611  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:02.924311  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:02.945554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.284664  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.286758  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.424558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.445443  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:03.785718  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:03.786015  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:03.924950  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:03.945320  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.285692  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.287456  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.423909  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.445028  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:04.784417  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:04.785847  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:04.859977  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:04.926069  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:04.944867  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:05.286410  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.424815  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.444725  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:59:05.565727  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.565775  566681 retry.go:31] will retry after 12.962063584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:05.784083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:05.785399  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:05.924301  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:05.945548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.284341  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.285025  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.424577  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.445930  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:06.785592  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:06.785777  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:06.924651  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:06.944548  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.284807  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.286980  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.424593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.445604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:07.785681  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:07.786565  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:07.924412  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:07.945298  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.284890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.285768  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.424422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.446875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:08.784632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:08.786747  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:08.924452  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:08.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.284701  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.286699  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.424832  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.445005  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:09.785080  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:09.787425  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:09.923720  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:09.944468  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.285848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.285877  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.425574  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.445229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:10.785800  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:10.788069  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:10.924958  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:10.945132  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.284817  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.286986  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.424693  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.444335  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:11.786755  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:11.788412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:11.924402  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:11.944935  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.285499  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.285734  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.424709  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.445959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:12.785549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:12.788041  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:12.924691  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:12.944292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.285346  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.285683  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.424754  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.445585  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:13.784745  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:13.786053  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:13.925403  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:13.945860  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.285184  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.286959  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.424804  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.446097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:14.791558  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:14.791556  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:14.927542  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:14.949956  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.284639  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.286617  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.426580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.446175  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:15.784496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:15.787071  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:15.925830  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:15.945618  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.286160  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:59:16.287392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.424973  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.446497  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:16.789545  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:16.790116  566681 kapi.go:107] duration metric: took 1m11.009348953s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:59:16.925187  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:16.947267  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.287647  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.426165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:17.450844  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:17.786988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:17.928406  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.027597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.293020  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.429378  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.449227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:18.528488  566681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:59:18.796448  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:18.929553  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:18.946292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.288404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.429199  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.452666  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:19.792639  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:19.864991  566681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.336449949s)
	W1002 06:59:19.865069  566681 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:59:19.865160  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865179  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865541  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865566  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 06:59:19.865575  566681 main.go:141] libmachine: Making call to close driver server
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) Calling .Close
	I1002 06:59:19.865582  566681 main.go:141] libmachine: (addons-535714) DBG | Closing plugin on server side
	I1002 06:59:19.865834  566681 main.go:141] libmachine: Successfully made call to close driver server
	I1002 06:59:19.865850  566681 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 06:59:19.865969  566681 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:59:19.924481  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:19.945058  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.286730  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.424767  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.445496  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:20.787056  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:20.925303  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:20.945594  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.285610  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.424114  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.445438  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:21.786589  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:21.924253  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:21.944783  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.285375  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.424724  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.445811  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:22.828328  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:22.929492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:22.945629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.286455  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.424116  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.444871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:23.785953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:23.924350  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:23.945321  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.286907  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.424613  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.445706  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:24.786265  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:59:24.925165  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:59:24.944432  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:59:25.286899  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical "waiting for pod" polling messages for these three pods repeated roughly every 500ms from 06:59:25 through 07:00:08; all three pods remained Pending throughout ...]
	I1002 07:00:08.786177  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:08.927180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:08.945006  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.285412  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.424690  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.444685  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:09.787988  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:09.926782  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:09.944680  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.286385  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.425422  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.445890  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:10.785391  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:10.925292  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:10.946110  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.286953  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.424926  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.445097  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:11.785990  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:11.925536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:11.945882  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.286095  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.426218  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.445400  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:12.787180  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:12.924959  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:12.945605  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.286936  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.424843  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.445297  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:13.786034  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:13.927087  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:13.945676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.286216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.424888  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.444768  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:14.785283  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:14.925300  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:14.945536  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.287658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.424359  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.445282  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:15.785834  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:15.924384  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:15.945604  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.286392  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.424670  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.445327  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:16.786482  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:16.924913  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:16.944676  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.286962  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.428554  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.445872  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:17.787125  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:17.924730  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:17.945508  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.286528  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.426864  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.444750  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:18.786434  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:18.926688  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:18.945265  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.286255  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.425491  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.446113  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:19.787657  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:19.925826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:19.946549  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.286336  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.424707  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.444772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:20.785404  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:20.925678  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:20.945252  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.285782  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.425487  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.447029  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:21.786550  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:21.923826  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:21.945389  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.288156  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.425586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.446602  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:22.787696  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:22.924004  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:22.945488  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.286521  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.424493  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.446224  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:23.786604  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:23.925118  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:23.945482  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.286583  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.424632  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.445848  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:24.785791  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:24.927001  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:24.944907  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.288049  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.424875  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.444559  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:25.786767  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:25.925226  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:25.945050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.285958  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.426083  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.444740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:26.787052  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:26.925376  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:26.945062  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.285717  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.424050  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.444966  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:27.787841  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:27.924740  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:27.945492  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.286484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.424236  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.445504  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:28.786601  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:28.924551  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:28.945948  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.288423  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.424871  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.445286  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:29.786695  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:29.926223  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:29.945407  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.286021  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.425588  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.445469  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:30.786883  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:30.926085  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:30.945814  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.287360  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.424981  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.445361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:31.787680  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:31.924556  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:31.945363  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.288077  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.425366  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.447433  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:32.847272  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:32.946629  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:32.946982  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.285658  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.424106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.445538  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:33.787044  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:33.927886  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:33.944580  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.290469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.425444  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.448620  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:34.789282  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:34.930009  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:34.948721  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.287469  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.432852  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.446652  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:35.788507  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:35.930180  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:35.954772  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.293484  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.435262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.449271  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:36.788843  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:36.928945  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:36.945831  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.288443  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.427657  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.447716  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:37.787995  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:37.933694  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:37.946106  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.287636  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.427229  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.446000  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:38.788221  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:38.925863  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:38.944669  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:39.286808  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:39.425719  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:39.446011  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.005533  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.011858  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.013227  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.289216  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.429330  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.446597  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:40.788887  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:40.934361  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:40.949590  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.288436  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.426586  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.446712  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:41.790082  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:41.926762  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:41.948030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.286904  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.428171  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.447262  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:42.787879  566681 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 07:00:42.928999  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:42.947900  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.289340  566681 kapi.go:107] duration metric: took 2m37.507327929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 07:00:43.426593  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.445627  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:43.927030  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:43.946124  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.426277  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.445511  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:44.928128  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:44.945892  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.424940  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.445245  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:45.925479  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:45.948084  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 07:00:46.427998  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:46.446348  566681 kapi.go:107] duration metric: took 2m35.504841728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 07:00:46.448361  566681 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-535714 cluster.
	I1002 07:00:46.449772  566681 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 07:00:46.451121  566681 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 07:00:46.925947  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.429007  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:47.927793  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.430587  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:48.930344  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.428197  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:49.928448  566681 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 07:00:50.425299  566681 kapi.go:107] duration metric: took 2m42.504972928s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 07:00:50.428467  566681 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 07:00:50.429978  566681 addons.go:514] duration metric: took 2m54.232824958s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds metrics-server storage-provisioner storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 07:00:50.430050  566681 start.go:246] waiting for cluster config update ...
	I1002 07:00:50.430076  566681 start.go:255] writing updated cluster config ...
	I1002 07:00:50.430525  566681 ssh_runner.go:195] Run: rm -f paused
	I1002 07:00:50.439887  566681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:50.446240  566681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.451545  566681 pod_ready.go:94] pod "coredns-66bc5c9577-w7hjm" is "Ready"
	I1002 07:00:50.451589  566681 pod_ready.go:86] duration metric: took 5.295665ms for pod "coredns-66bc5c9577-w7hjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.454257  566681 pod_ready.go:83] waiting for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.459251  566681 pod_ready.go:94] pod "etcd-addons-535714" is "Ready"
	I1002 07:00:50.459291  566681 pod_ready.go:86] duration metric: took 4.998226ms for pod "etcd-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.463385  566681 pod_ready.go:83] waiting for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.473863  566681 pod_ready.go:94] pod "kube-apiserver-addons-535714" is "Ready"
	I1002 07:00:50.473899  566681 pod_ready.go:86] duration metric: took 10.481477ms for pod "kube-apiserver-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.478391  566681 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:50.845519  566681 pod_ready.go:94] pod "kube-controller-manager-addons-535714" is "Ready"
	I1002 07:00:50.845556  566681 pod_ready.go:86] duration metric: took 367.127625ms for pod "kube-controller-manager-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.046035  566681 pod_ready.go:83] waiting for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.445054  566681 pod_ready.go:94] pod "kube-proxy-z495t" is "Ready"
	I1002 07:00:51.445095  566681 pod_ready.go:86] duration metric: took 399.024039ms for pod "kube-proxy-z495t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:51.644949  566681 pod_ready.go:83] waiting for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045721  566681 pod_ready.go:94] pod "kube-scheduler-addons-535714" is "Ready"
	I1002 07:00:52.045756  566681 pod_ready.go:86] duration metric: took 400.769133ms for pod "kube-scheduler-addons-535714" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:00:52.045769  566681 pod_ready.go:40] duration metric: took 1.605821704s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:00:52.107681  566681 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 07:00:52.109482  566681 out.go:179] * Done! kubectl is now configured to use "addons-535714" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.647022048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388613646990081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=687a9f88-b846-468d-8f39-1d0892291830 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.648146158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b7a4675-7fc9-470c-b00a-22b70f45073d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.648216034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b7a4675-7fc9-470c-b00a-22b70f45073d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.648814285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757eaf3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d62b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c
0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b7a4675-7fc9-470c-b00a-22b70f45073d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.698407323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e236c45-8a07-4cf6-a5c6-c0d5a48937af name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.698626676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e236c45-8a07-4cf6-a5c6-c0d5a48937af name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.699914286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae088917-6c65-4665-917d-07aa5f18d2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.701585718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388613701554311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae088917-6c65-4665-917d-07aa5f18d2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.703159777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c432b0be-9757-431d-9f9e-4cc836614200 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.703403101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c432b0be-9757-431d-9f9e-4cc836614200 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.704120288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labe
ls:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c
0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c432b0be-9757-431d-9f9e-4cc836614200 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.747482361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b16c8ec2-56c3-422b-b91d-6d34346c9a68 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.747589833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b16c8ec2-56c3-422b-b91d-6d34346c9a68 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.748730387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a748186-6c1d-45b8-90a9-ccb08463272f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.750204544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388613750173443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a748186-6c1d-45b8-90a9-ccb08463272f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.750925976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63866c35-5a3c-421c-8af8-db62260e2199 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.751523538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63866c35-5a3c-421c-8af8-db62260e2199 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.752751872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labe
ls:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c
0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63866c35-5a3c-421c-8af8-db62260e2199 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.795924770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4141d593-d63d-458e-be23-9cae258f0ada name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.796022328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4141d593-d63d-458e-be23-9cae258f0ada name=/runtime.v1.RuntimeService/Version
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.797907934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c38e46e-b2e7-417c-8d06-9a029f101fa0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.799128418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759388613799034829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:494447,},InodesUsed:&UInt64Value{Value:176,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c38e46e-b2e7-417c-8d06-9a029f101fa0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.799833426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ff0aa27-3b02-46b8-a954-b38ce41692f6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.799895151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ff0aa27-3b02-46b8-a954-b38ce41692f6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:03:33 addons-535714 crio[827]: time="2025-10-02 07:03:33.801244618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86667c9385b6747dd5367a3bac699b68f1e8977065a4adf9cfe75c25f7988f30,PodSandboxId:2fe38d26ed81edf4523f884bf8a6e093a0504f3898458b3b8a848238df4af302,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759388455787733854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dbf8075-8246-45d0-ae37-79da4f9f9d3b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1593fcd2d1f19e1b545b0e61e26e930921bd0869aa8561520521bae06e290f,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1759388450084597292,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4c3a8c0ea5cfd89ba9d1b44492275163aa57251f009837493367f6217d1725,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1759388448399422371,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65d9fdba36a17f1a90b459eeee3648bacb13df988b15b19fc279430769ac1934,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1759388446813164181,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.
kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81f190fa89d8e5cc7defafbcfe7e9680165f222b8cb38089107697d481f07044,PodSandboxId:2c0a4b75d16bbd0cc35c4d9794b9c537c845604f91f9b9717e0e33821b236d21,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759388442848671734,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-jcwrw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e85381c5-c30c-4dcd-92d1-a7757ea
f3d60,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0683a8b55d03d936cabce574b04d2a72c7c35e84f316d16f46e1dccb91fc7f06,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1759388435334628877,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3456f5ab4e9dbe404796773873f64be62d6b81bec8e0530a56835592c720f84b,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1759388403765690243,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6808e1f9304e8cef617aba76bd29b3cab6a2bdfb44cdbeb855308750024149,PodSandboxId:dabf0b0e1eb703dea619c13e9309d343e9f3e85d72091238405bb648568efbd8,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1759388402291958722,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a933762-fa4f-4072-8b4b-d8b6c46d4f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24139e6a7a8b119c621684982bbcefca2cae2ac4e7bb729780ca16b694b55fbd,PodSandboxId:e2ed9baa384a5d03db7cd6cfd668bcc454aa679448b86e4a773a83f9858a2676,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1759388400909296254,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27de7994-2f0d-4f74-a4f7-7e22d4971553,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46de36d65127e19985f27efeb068f42cc63a26d4810d73147e7ade4bd37118f1,PodSandboxId:e2277305f110b148564a81d503d97856191e5bb4b7b0f223203b29cfff1b72ab,Metada
ta:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1759388399256836094,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8sjk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914e6ab5-a344-4664-a33a-b4909c1b7903,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d5407fe4705d49530b9761c4cebd9fe6d4ebe3c7d6
2b7716b4152cd402ebba,PodSandboxId:e2ad15837b991c05439a565e469ada889d2bd5051f2a49bf2322d498ea6c9853,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397460422951,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-g4hd4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f552d1e8-79a8-4bf6-be47-26aa19781b53,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:ea44a6e53635f03b784f087b0e164539221fdc7443ba3f7dda600bfda5c82cb9,PodSandboxId:bbec6993c46f777ba39bf5ce5a3530ffd5bf08e697630fe0a8c76d2f43aead1e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1759388397345200897,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-knwl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcee0c5b-2829-4ba3-82ad-31430c403352,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f84e33ebf14f877a74c95ffe64529b6d94861f8a7eaa248a4ffb25ef2d96735,PodSandboxId:45c7f94d02bfb1103c4fe9dc462cb2bf12225a771a8668cc0b1015499c67ec42,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395732259114,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46z2n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d7c75803-8d7e-4862-96d6-b73cd0af407b,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce0b3e6c8fef17da33743862e18cba8c4404198a2057ee5e8ffb2be25604044,PodSandboxId:13a0722f22fb7608b2e886e7a5ac018995435904bce7beb4672098244c66ad81,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759388395629748197,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jsw7z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3da341f-b7b3-4d68-9e03-1921f3c1cc30,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20e001ce5fa7c70d55fb2a59f8e98682b37d10113dcad1a0cfeeb413afbe04b,PodSandboxId:53cbb87b563ff82efb57dc78b0af715ad70fbf167a6d5cd32e3965bba81e97c8,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759388393845709573,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2hn79,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 06f70d92-86d6-4308-912f-5496d2127813,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fad243c3b2c74fb08eab712c42df177b4fe6fc950caa69393b80b3057304,PodSandboxId:99eafaf0bf06bd5053979a2666ca23b8cc837683956eb400d18c4957989b049a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759388359467663085,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-gf62q,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: b32e0acb-20af-4794-8b5f-441cdf181bf1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68a602009da40f757148477cd12a6a35b2293a4c0451232c539a42627e52857,PodSandboxId:1239599eb3508337c290825759ab7983ceaff236018a1704749421a0b32be07e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759388320710566949,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 0db8a359-0034-4d93-9741-a13248109f50,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75992e0dff6a5f40f6a9c531910be7be9a867d5070be1b86eccef82c570a21f0,PodSandboxId:0863b64ffcb347389d632e5a53011f1bb4f718008d42dd21c8855c82e531fbc5,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,State:CONTAINER_RUNNING,CreatedAt:1759388312372256052,Labe
ls:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-85f6b7fc65-hh72s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90b98e30-4d59-46a7-a911-3e347c8cffe8,},Annotations:map[string]string{io.kubernetes.container.hash: d5196bf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0,PodSandboxId:348af25e84579cbff58d1b8545342356c8da369ec73f01795b09ace584ee0d8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759388288541266035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e38a8c17-a75a-460e-bf52-2fc7f98d9595,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58aa192645e9640ab8747f01e3811d432c8cdf789fd66f92bb1177ab7ade3182,PodSandboxId:dba3c496294556a243af4b622cbb0b7c5d02527f48806160b28ac2e6877df1b8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759388285187697060,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-f7qcs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 789f2b98-37d8-40b1-9d96-0943237a099a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb,PodSandboxId:4fcabfc373e6072b7384a69d01a85827da161f07d92f19fff4d9e395ee772b3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[str
ing]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759388277591786131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-w7hjm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df6c56bd-f409-4243-8017-c7b13bcd2610,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b,PodSandboxId:646600c8d86f77df2eeb7aaeb300a8e38f489360809e08b47941fdad055bab11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759388276182970344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z495t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff433508-be20-4930-a1bf-51f227b0c22a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2,PodSandboxId:c7d4e0eb984a2b214c866037074091c006d29e0fa2f0d276d0b98b0d8f2f46cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759388265023839255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62755d9f269e8b989fe8a7e42cd0929e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca,PodSandboxId:36d2846a22a8444e1de7954e6a05a383ead6254eb354b3c5208bd4dcd347bad5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759388265007942041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82b7e949b593c9ca57ac1bb4c8035739,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20,PodSandboxId:63f4cb9d3437a25ee283bc1b5514db63e3f02fc1962a3056d2c15bc8d50e1039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759388264993758106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a05d1730ec0bee3f4668d7a7ad72c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68,PodSandboxId:35f49d5f3b8fba6160ea00d2423e08320a353b56ba9ec80f2e0699d6eaa5e9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759388264962624844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-535714,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9a46cc8335aab1c7c42675d7e4c
0ebb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ff0aa27-3b02-46b8-a954-b38ce41692f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	86667c9385b67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   2fe38d26ed81e       busybox
	6e1593fcd2d1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f4c3a8c0ea5c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago       Running             csi-provisioner                          0                   e2277305f110b       csi-hostpathplugin-8sjk8
	65d9fdba36a17       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago       Running             liveness-probe                           0                   e2277305f110b       csi-hostpathplugin-8sjk8
	81f190fa89d8e       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             2 minutes ago       Running             controller                               0                   2c0a4b75d16bb       ingress-nginx-controller-9cc49f96f-jcwrw
	0683a8b55d03d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago       Running             hostpath                                 0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3456f5ab4e9db       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   e2277305f110b       csi-hostpathplugin-8sjk8
	3f6808e1f9304       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   dabf0b0e1eb70       csi-hostpath-resizer-0
	24139e6a7a8b1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   e2ed9baa384a5       csi-hostpath-attacher-0
	46de36d65127e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   e2277305f110b       csi-hostpathplugin-8sjk8
	98d5407fe4705       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   e2ad15837b991       snapshot-controller-7d9fbc56b8-g4hd4
	ea44a6e53635f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   bbec6993c46f7       snapshot-controller-7d9fbc56b8-knwl8
	2f84e33ebf14f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   3 minutes ago       Exited              patch                                    0                   45c7f94d02bfb       ingress-nginx-admission-patch-46z2n
	5ce0b3e6c8fef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   3 minutes ago       Exited              create                                   0                   13a0722f22fb7       ingress-nginx-admission-create-jsw7z
	d20e001ce5fa7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            3 minutes ago       Running             gadget                                   0                   53cbb87b563ff       gadget-2hn79
	b1d2fad243c3b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   99eafaf0bf06b       local-path-provisioner-648f6765c9-gf62q
	c68a602009da4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   1239599eb3508       kube-ingress-dns-minikube
	75992e0dff6a5       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               5 minutes ago       Running             cloud-spanner-emulator                   0                   0863b64ffcb34       cloud-spanner-emulator-85f6b7fc65-hh72s
	0f29426982799       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             5 minutes ago       Running             storage-provisioner                      0                   348af25e84579       storage-provisioner
	58aa192645e96       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     5 minutes ago       Running             amd-gpu-device-plugin                    0                   dba3c49629455       amd-gpu-device-plugin-f7qcs
	6e31cb36c4500       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             5 minutes ago       Running             coredns                                  0                   4fcabfc373e60       coredns-66bc5c9577-w7hjm
	fb130499febb3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             5 minutes ago       Running             kube-proxy                               0                   646600c8d86f7       kube-proxy-z495t
	466837c8cdfcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   c7d4e0eb984a2       etcd-addons-535714
	da8295539fc0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   36d2846a22a84       kube-scheduler-addons-535714
	da58df3cad660       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   63f4cb9d3437a       kube-controller-manager-addons-535714
	deaf436584a26       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   35f49d5f3b8fb       kube-apiserver-addons-535714
	
	
	==> coredns [6e31cb36c4500afc7842ba83cf25da6467b08d5093a936fed9e22765a4d5bcbb] <==
	[INFO] 10.244.0.7:35110 - 11487 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000105891s
	[INFO] 10.244.0.7:35110 - 31639 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100284s
	[INFO] 10.244.0.7:35110 - 25746 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080168s
	[INFO] 10.244.0.7:35110 - 43819 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100728s
	[INFO] 10.244.0.7:35110 - 63816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000124028s
	[INFO] 10.244.0.7:35110 - 35022 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129164s
	[INFO] 10.244.0.7:35110 - 28119 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001725128s
	[INFO] 10.244.0.7:50584 - 36630 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148556s
	[INFO] 10.244.0.7:50584 - 36962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067971s
	[INFO] 10.244.0.7:37190 - 758 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052949s
	[INFO] 10.244.0.7:37190 - 1043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051809s
	[INFO] 10.244.0.7:37461 - 4143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057036s
	[INFO] 10.244.0.7:37461 - 4397 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049832s
	[INFO] 10.244.0.7:36180 - 39849 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111086s
	[INFO] 10.244.0.7:36180 - 40050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069757s
	[INFO] 10.244.0.23:54237 - 52266 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001020809s
	[INFO] 10.244.0.23:46188 - 47837 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000755825s
	[INFO] 10.244.0.23:50620 - 40298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145474s
	[INFO] 10.244.0.23:46344 - 40921 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123896s
	[INFO] 10.244.0.23:50353 - 65439 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000272665s
	[INFO] 10.244.0.23:50633 - 23346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143762s
	[INFO] 10.244.0.23:52616 - 28857 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002777615s
	[INFO] 10.244.0.23:55533 - 44086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003112269s
	[INFO] 10.244.0.27:55844 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000811242s
	[INFO] 10.244.0.27:51921 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000498985s
	
	
	==> describe nodes <==
	Name:               addons-535714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-535714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_57_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535714
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-535714"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:57:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535714
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:03:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:02:26 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:02:26 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:02:26 +0000   Thu, 02 Oct 2025 06:57:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:02:26 +0000   Thu, 02 Oct 2025 06:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    addons-535714
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 26ed18e3cae343e2ba2a85be4a0a7371
	  System UUID:                26ed18e3-cae3-43e2-ba2a-85be4a0a7371
	  Boot ID:                    73babc46-f812-4e67-b425-db513a204e97
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-85f6b7fc65-hh72s                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  gadget                      gadget-2hn79                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jcwrw                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m29s
	  kube-system                 amd-gpu-device-plugin-f7qcs                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-66bc5c9577-w7hjm                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m38s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 csi-hostpathplugin-8sjk8                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 etcd-addons-535714                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m45s
	  kube-system                 kube-apiserver-addons-535714                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-addons-535714                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-z495t                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-addons-535714                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 snapshot-controller-7d9fbc56b8-g4hd4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-knwl8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  local-path-storage          helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  local-path-storage          local-path-provisioner-648f6765c9-gf62q                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-hpzfn                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m36s  kube-proxy       
	  Normal  Starting                 5m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m43s  kubelet          Node addons-535714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s  kubelet          Node addons-535714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s  kubelet          Node addons-535714 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m42s  kubelet          Node addons-535714 status is now: NodeReady
	  Normal  RegisteredNode           5m40s  node-controller  Node addons-535714 event: Registered Node addons-535714 in Controller
	
	
	==> dmesg <==
	[  +0.288437] kauditd_printk_skb: 263 callbacks suppressed
	[  +1.352194] hrtimer: interrupt took 6208704 ns
	[  +0.000018] kauditd_printk_skb: 341 callbacks suppressed
	[ +14.846599] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.362026] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.319606] kauditd_printk_skb: 17 callbacks suppressed
	[Oct 2 06:59] kauditd_printk_skb: 20 callbacks suppressed
	[ +33.860109] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.779557] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 2 07:00] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.976810] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.109220] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.510995] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.560914] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.223140] kauditd_printk_skb: 56 callbacks suppressed
	[Oct 2 07:01] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.884695] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.185211] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.060908] kauditd_printk_skb: 58 callbacks suppressed
	[Oct 2 07:02] kauditd_printk_skb: 10 callbacks suppressed
	[  +1.331616] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.250929] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [466837c8cdfcc560afba132ba43ec776add2cc983436441372f583858cb57aa2] <==
	{"level":"info","ts":"2025-10-02T06:59:59.120512Z","caller":"traceutil/trace.go:172","msg":"trace[1384240821] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"205.082518ms","start":"2025-10-02T06:59:58.915407Z","end":"2025-10-02T06:59:59.120489Z","steps":["trace[1384240821] 'read index received'  (duration: 205.072637ms)","trace[1384240821] 'applied index is now lower than readState.Index'  (duration: 8.699µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T06:59:59.121116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.148075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-10-02T06:59:59.121160Z","caller":"traceutil/trace.go:172","msg":"trace[787006594] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-46z2n; range_end:; response_count:1; response_revision:1085; }","duration":"198.245202ms","start":"2025-10-02T06:59:58.922907Z","end":"2025-10-02T06:59:59.121152Z","steps":["trace[787006594] 'agreement among raft nodes before linearized reading'  (duration: 198.083065ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T06:59:59.121300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.835357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T06:59:59.121339Z","caller":"traceutil/trace.go:172","msg":"trace[1316712396] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"183.87568ms","start":"2025-10-02T06:59:58.937457Z","end":"2025-10-02T06:59:59.121332Z","steps":["trace[1316712396] 'agreement among raft nodes before linearized reading'  (duration: 183.815946ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:32.832647Z","caller":"traceutil/trace.go:172","msg":"trace[1453851995] linearizableReadLoop","detail":"{readStateIndex:1231; appliedIndex:1231; }","duration":"220.066962ms","start":"2025-10-02T07:00:32.612509Z","end":"2025-10-02T07:00:32.832576Z","steps":["trace[1453851995] 'read index received'  (duration: 220.05963ms)","trace[1453851995] 'applied index is now lower than readState.Index'  (duration: 6.189µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T07:00:32.832730Z","caller":"traceutil/trace.go:172","msg":"trace[302351669] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"243.94686ms","start":"2025-10-02T07:00:32.588772Z","end":"2025-10-02T07:00:32.832719Z","steps":["trace[302351669] 'process raft request'  (duration: 243.833114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:32.832967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.479862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T07:00:32.833001Z","caller":"traceutil/trace.go:172","msg":"trace[1089606970] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1180; }","duration":"220.525584ms","start":"2025-10-02T07:00:32.612469Z","end":"2025-10-02T07:00:32.832995Z","steps":["trace[1089606970] 'agreement among raft nodes before linearized reading'  (duration: 220.422716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.990824Z","caller":"traceutil/trace.go:172","msg":"trace[1822440841] linearizableReadLoop","detail":"{readStateIndex:1259; appliedIndex:1259; }","duration":"216.288139ms","start":"2025-10-02T07:00:39.774473Z","end":"2025-10-02T07:00:39.990762Z","steps":["trace[1822440841] 'read index received'  (duration: 216.279919ms)","trace[1822440841] 'applied index is now lower than readState.Index'  (duration: 6.642µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:00:39.991358Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.077704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:00:39.991456Z","caller":"traceutil/trace.go:172","msg":"trace[1082597067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1206; }","duration":"217.190679ms","start":"2025-10-02T07:00:39.774258Z","end":"2025-10-02T07:00:39.991449Z","steps":["trace[1082597067] 'agreement among raft nodes before linearized reading'  (duration: 216.738402ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:00:39.992313Z","caller":"traceutil/trace.go:172","msg":"trace[515400758] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"337.963385ms","start":"2025-10-02T07:00:39.654341Z","end":"2025-10-02T07:00:39.992305Z","steps":["trace[515400758] 'process raft request'  (duration: 337.312964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:00:39.992477Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.654280Z","time spent":"338.099015ms","remote":"127.0.0.1:56776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1205 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-02T07:00:39.994757Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T07:00:39.655974Z","time spent":"338.780211ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-10-02T07:02:18.249354Z","caller":"traceutil/trace.go:172","msg":"trace[1937839981] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"110.209012ms","start":"2025-10-02T07:02:18.139042Z","end":"2025-10-02T07:02:18.249251Z","steps":["trace[1937839981] 'process raft request'  (duration: 107.760601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T07:02:25.358154Z","caller":"traceutil/trace.go:172","msg":"trace[1514029901] linearizableReadLoop","detail":"{readStateIndex:1683; appliedIndex:1683; }","duration":"269.707219ms","start":"2025-10-02T07:02:25.088427Z","end":"2025-10-02T07:02:25.358135Z","steps":["trace[1514029901] 'read index received'  (duration: 269.698824ms)","trace[1514029901] 'applied index is now lower than readState.Index'  (duration: 7.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T07:02:25.358835Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.337456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.358908Z","caller":"traceutil/trace.go:172","msg":"trace[129833481] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"270.47424ms","start":"2025-10-02T07:02:25.088423Z","end":"2025-10-02T07:02:25.358898Z","steps":["trace[129833481] 'agreement among raft nodes before linearized reading'  (duration: 270.303097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.361904Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.156634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.361957Z","caller":"traceutil/trace.go:172","msg":"trace[228810763] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1605; }","duration":"257.224721ms","start":"2025-10-02T07:02:25.104724Z","end":"2025-10-02T07:02:25.361949Z","steps":["trace[228810763] 'agreement among raft nodes before linearized reading'  (duration: 257.141662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.13527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363670Z","caller":"traceutil/trace.go:172","msg":"trace[2116337020] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1606; }","duration":"129.197912ms","start":"2025-10-02T07:02:25.234464Z","end":"2025-10-02T07:02:25.363662Z","steps":["trace[2116337020] 'agreement among raft nodes before linearized reading'  (duration: 129.113844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T07:02:25.363900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.575698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T07:02:25.363939Z","caller":"traceutil/trace.go:172","msg":"trace[2132272707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1606; }","duration":"192.616449ms","start":"2025-10-02T07:02:25.171317Z","end":"2025-10-02T07:02:25.363933Z","steps":["trace[2132272707] 'agreement among raft nodes before linearized reading'  (duration: 192.563634ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:03:34 up 6 min,  0 users,  load average: 1.08, 1.46, 0.80
	Linux addons-535714 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [deaf436584a262db3056c954f385faad7a33153116e080b9996154d84a419b68] <==
	W1002 06:59:04.261853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:59:04.262027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:04.261865       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:04.262054       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 06:59:04.263426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:59:19.669740       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:59:19.669928       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 06:59:19.671457       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.672416       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.677780       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	E1002 06:59:19.698801       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.39.52:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.39.52:443: connect: connection refused" logger="UnhandledError"
	I1002 06:59:19.813028       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 07:01:02.988144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59036: use of closed network connection
	E1002 07:01:03.204248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.164:8443->192.168.39.1:59068: use of closed network connection
	I1002 07:01:12.103579       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 07:01:12.401820       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.38.17"}
	I1002 07:01:12.978874       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.6.38"}
	I1002 07:01:20.686056       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [da58df3cad6606b436fb29751b272c45216a615941a3f7eddc44643e3e9dce20] <==
	I1002 06:57:54.853402       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 06:57:54.853436       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 06:57:54.854794       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:57:54.854865       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:57:54.855046       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 06:57:54.858148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 06:57:54.858221       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 06:57:54.858258       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 06:57:54.858263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 06:57:54.858268       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 06:57:54.860904       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:57:54.863351       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:57:54.869106       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-535714" podCIDRs=["10.244.0.0/24"]
	E1002 06:58:03.439760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:58:24.819245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:24.819664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:58:24.819801       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:58:24.847762       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:58:24.855798       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:58:24.921306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:58:24.957046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:58:54.928427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:58:54.966681       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 07:01:16.701698       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1002 07:02:37.947143       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [fb130499febb3c9d471b7bb358f081873e82c3a33b0f074d3538ddd7ada1101b] <==
	I1002 06:57:56.940558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:57:57.042011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:57:57.042117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.164"]
	E1002 06:57:57.042205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:57:57.167383       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 06:57:57.167427       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 06:57:57.167460       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:57:57.190949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:57:57.192886       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:57:57.192902       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:57:57.294325       1 config.go:200] "Starting service config controller"
	I1002 06:57:57.294358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:57:57.294429       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:57:57.294434       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:57:57.294455       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:57:57.294459       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:57:57.438397       1 config.go:309] "Starting node config controller"
	I1002 06:57:57.441950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:57:57.479963       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:57:57.494463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:57:57.494530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:57:57.494543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [da8295539fc0e9657c42c4819e69d8fc440a678ed70b19deddada19eac36b5ca] <==
	E1002 06:57:47.853654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:57:47.853709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:47.853767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:57:47.853824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:57:47.854040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:47.855481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:47.854491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.707149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:57:48.761606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:57:48.783806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:57:48.817274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:57:48.856898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 06:57:48.856969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:57:48.860214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:57:48.880906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:57:48.896863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:57:48.913429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:57:48.964287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:57:48.985241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:57:49.005874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:57:49.118344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:57:49.123456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:57:49.157781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:57:49.202768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 06:57:51.042340       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:02:33 addons-535714 kubelet[1509]: I1002 07:02:33.469427    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wnv\" (UniqueName: \"kubernetes.io/projected/a634787a-884e-490d-a562-e4dad0c17231-kube-api-access-94wnv\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"a634787a-884e-490d-a562-e4dad0c17231\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:02:33 addons-535714 kubelet[1509]: I1002 07:02:33.469464    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a634787a-884e-490d-a562-e4dad0c17231-data\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"a634787a-884e-490d-a562-e4dad0c17231\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:02:33 addons-535714 kubelet[1509]: I1002 07:02:33.469486    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a634787a-884e-490d-a562-e4dad0c17231-script\") pod \"helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3\" (UID: \"a634787a-884e-490d-a562-e4dad0c17231\") " pod="local-path-storage/helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3"
	Oct 02 07:02:40 addons-535714 kubelet[1509]: I1002 07:02:40.329220    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znf77\" (UniqueName: \"kubernetes.io/projected/2f677461-445c-4e2a-aeaa-28f894f29b0b-kube-api-access-znf77\") pod \"task-pv-pod\" (UID: \"2f677461-445c-4e2a-aeaa-28f894f29b0b\") " pod="default/task-pv-pod"
	Oct 02 07:02:40 addons-535714 kubelet[1509]: I1002 07:02:40.329274    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c4b6ba5-745c-4f9c-a12c-e5d9604279a1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c8c4905e-9f5d-11f0-96f3-e64440f40013\") pod \"task-pv-pod\" (UID: \"2f677461-445c-4e2a-aeaa-28f894f29b0b\") " pod="default/task-pv-pod"
	Oct 02 07:02:40 addons-535714 kubelet[1509]: I1002 07:02:40.477294    1509 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-2c4b6ba5-745c-4f9c-a12c-e5d9604279a1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c8c4905e-9f5d-11f0-96f3-e64440f40013\") pod \"task-pv-pod\" (UID: \"2f677461-445c-4e2a-aeaa-28f894f29b0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/8b62f2688ed6046fbe812d6d1a2d2ec7258f6388fa338ebeb01d53e2ecaec8fe/globalmount\"" pod="default/task-pv-pod"
	Oct 02 07:02:41 addons-535714 kubelet[1509]: E1002 07:02:41.647779    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388561646894621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:02:41 addons-535714 kubelet[1509]: E1002 07:02:41.647825    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388561646894621  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:02:51 addons-535714 kubelet[1509]: E1002 07:02:51.650918    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388571650482633  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:02:51 addons-535714 kubelet[1509]: E1002 07:02:51.651286    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388571650482633  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:02:57 addons-535714 kubelet[1509]: E1002 07:02:57.316856    1509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 07:02:57 addons-535714 kubelet[1509]: E1002 07:02:57.316938    1509 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 02 07:02:57 addons-535714 kubelet[1509]: E1002 07:02:57.317190    1509 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(c134160b-cfc5-4bda-9771-650c3dc1da25): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:02:57 addons-535714 kubelet[1509]: E1002 07:02:57.317231    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:03:01 addons-535714 kubelet[1509]: E1002 07:03:01.654423    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388581653618259  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:01 addons-535714 kubelet[1509]: E1002 07:03:01.654467    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388581653618259  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:10 addons-535714 kubelet[1509]: E1002 07:03:10.178152    1509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="c134160b-cfc5-4bda-9771-650c3dc1da25"
	Oct 02 07:03:11 addons-535714 kubelet[1509]: E1002 07:03:11.656881    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388591656195942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:11 addons-535714 kubelet[1509]: E1002 07:03:11.656904    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388591656195942  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:12 addons-535714 kubelet[1509]: I1002 07:03:12.173930    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-hh72s" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:03:21 addons-535714 kubelet[1509]: E1002 07:03:21.661158    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388601659766598  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:21 addons-535714 kubelet[1509]: E1002 07:03:21.661268    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388601659766598  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:22 addons-535714 kubelet[1509]: I1002 07:03:22.173668    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:03:31 addons-535714 kubelet[1509]: E1002 07:03:31.663755    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759388611663507503  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	Oct 02 07:03:31 addons-535714 kubelet[1509]: E1002 07:03:31.663777    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759388611663507503  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:494447}  inodes_used:{value:176}}"
	
	
	==> storage-provisioner [0f2942698279982b8406ac059639acb1c1f13cdf51b7860d2c43431f455553b0] <==
	W1002 07:03:09.108858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:11.112842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:11.118016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:13.121590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:13.127316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:15.130223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:15.137812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:17.141692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:17.147219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:19.152517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:19.158635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:21.162454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:21.172152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:23.176424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:23.185903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:25.190114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:25.196215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:27.200309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:27.205497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:29.209137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:29.213859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:31.217997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:31.226237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:33.229900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:03:33.235224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535714 -n addons-535714
helpers_test.go:269: (dbg) Run:  kubectl --context addons-535714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3 yakd-dashboard-5ff678cb9-hpzfn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3 yakd-dashboard-5ff678cb9-hpzfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3 yakd-dashboard-5ff678cb9-hpzfn: exit status 1 (106.16497ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:01:12 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxhkh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jxhkh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m23s                default-scheduler  Successfully assigned default/nginx to addons-535714
	  Warning  Failed     38s (x2 over 80s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     38s (x2 over 80s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    25s (x2 over 79s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     25s (x2 over 79s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    14s (x3 over 2m22s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535714/192.168.39.164
	Start Time:       Thu, 02 Oct 2025 07:02:40 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znf77 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-znf77:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  55s   default-scheduler  Successfully assigned default/task-pv-pod to addons-535714
	  Normal  Pulling    54s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48lf (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-g48lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jsw7z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46z2n" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-hpzfn" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-535714 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-jsw7z ingress-nginx-admission-patch-46z2n helper-pod-create-pvc-3fd1928b-f1d1-4931-ad40-22c18d3043b3 yakd-dashboard-5ff678cb9-hpzfn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable yakd --alsologtostderr -v=1: (10.927908382s)
--- FAIL: TestAddons/parallel/Yakd (133.99s)

TestFunctional/parallel/DashboardCmd (302.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365308 --alsologtostderr -v=1]
E1002 07:15:52.833077  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:16:20.548738  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365308 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365308 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365308 --alsologtostderr -v=1] stderr:
I1002 07:15:35.329417  581120 out.go:360] Setting OutFile to fd 1 ...
I1002 07:15:35.329690  581120 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:15:35.329703  581120 out.go:374] Setting ErrFile to fd 2...
I1002 07:15:35.329710  581120 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:15:35.329996  581120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:15:35.330410  581120 mustload.go:65] Loading cluster: functional-365308
I1002 07:15:35.330785  581120 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:15:35.331412  581120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:15:35.331502  581120 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:15:35.345673  581120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
I1002 07:15:35.346275  581120 main.go:141] libmachine: () Calling .GetVersion
I1002 07:15:35.346851  581120 main.go:141] libmachine: Using API Version  1
I1002 07:15:35.346879  581120 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:15:35.347246  581120 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:15:35.347450  581120 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:15:35.349259  581120 host.go:66] Checking if "functional-365308" exists ...
I1002 07:15:35.349574  581120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:15:35.349624  581120 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:15:35.363705  581120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
I1002 07:15:35.364289  581120 main.go:141] libmachine: () Calling .GetVersion
I1002 07:15:35.364934  581120 main.go:141] libmachine: Using API Version  1
I1002 07:15:35.364960  581120 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:15:35.365446  581120 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:15:35.365711  581120 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:15:35.365879  581120 api_server.go:166] Checking apiserver status ...
I1002 07:15:35.365959  581120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 07:15:35.365988  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:15:35.369403  581120 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:15:35.369832  581120 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:15:35.369854  581120 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:15:35.370050  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:15:35.370247  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:15:35.370414  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:15:35.370583  581120 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:15:35.468118  581120 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6306/cgroup
W1002 07:15:35.483856  581120 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6306/cgroup: Process exited with status 1
stdout:

stderr:
I1002 07:15:35.483939  581120 ssh_runner.go:195] Run: ls
I1002 07:15:35.489749  581120 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8441/healthz ...
I1002 07:15:35.496597  581120 api_server.go:279] https://192.168.39.84:8441/healthz returned 200:
ok
W1002 07:15:35.496670  581120 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 07:15:35.496851  581120 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:15:35.496869  581120 addons.go:69] Setting dashboard=true in profile "functional-365308"
I1002 07:15:35.496878  581120 addons.go:238] Setting addon dashboard=true in "functional-365308"
I1002 07:15:35.496917  581120 host.go:66] Checking if "functional-365308" exists ...
I1002 07:15:35.497245  581120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:15:35.497301  581120 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:15:35.512547  581120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
I1002 07:15:35.513208  581120 main.go:141] libmachine: () Calling .GetVersion
I1002 07:15:35.513761  581120 main.go:141] libmachine: Using API Version  1
I1002 07:15:35.513783  581120 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:15:35.514186  581120 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:15:35.514726  581120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:15:35.514769  581120 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:15:35.528871  581120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
I1002 07:15:35.529348  581120 main.go:141] libmachine: () Calling .GetVersion
I1002 07:15:35.529783  581120 main.go:141] libmachine: Using API Version  1
I1002 07:15:35.529805  581120 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:15:35.530240  581120 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:15:35.530483  581120 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:15:35.532495  581120 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:15:35.535864  581120 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 07:15:35.537458  581120 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 07:15:35.538839  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 07:15:35.538864  581120 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 07:15:35.538886  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:15:35.542234  581120 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:15:35.542708  581120 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:15:35.542736  581120 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:15:35.542933  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:15:35.543127  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:15:35.543315  581120 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:15:35.543475  581120 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:15:35.645552  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 07:15:35.645583  581120 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 07:15:35.668588  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 07:15:35.668621  581120 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 07:15:35.691573  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 07:15:35.691604  581120 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 07:15:35.713797  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 07:15:35.713829  581120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 07:15:35.738030  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 07:15:35.738062  581120 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 07:15:35.763085  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 07:15:35.763124  581120 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 07:15:35.787783  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 07:15:35.787811  581120 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 07:15:35.811958  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 07:15:35.811985  581120 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 07:15:35.835893  581120 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 07:15:35.835929  581120 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 07:15:35.859164  581120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 07:15:36.603176  581120 main.go:141] libmachine: Making call to close driver server
I1002 07:15:36.603226  581120 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:15:36.603570  581120 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:15:36.603617  581120 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:15:36.603640  581120 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:15:36.603657  581120 main.go:141] libmachine: Making call to close driver server
I1002 07:15:36.603668  581120 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:15:36.603929  581120 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:15:36.603948  581120 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:15:36.603956  581120 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:15:36.606332  581120 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-365308 addons enable metrics-server

I1002 07:15:36.607788  581120 addons.go:201] Writing out "functional-365308" config to set dashboard=true...
W1002 07:15:36.608032  581120 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 07:15:36.608717  581120 kapi.go:59] client config for functional-365308: &rest.Config{Host:"https://192.168.39.84:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.key", CAFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 07:15:36.609203  581120 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 07:15:36.609221  581120 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 07:15:36.609225  581120 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 07:15:36.609229  581120 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 07:15:36.609232  581120 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 07:15:36.617246  581120 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  cddbabfe-f052-4452-8ba5-294b2d31313f 852 0 2025-10-02 07:15:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 07:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.63.200,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.63.200],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 07:15:36.617456  581120 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 07:15:36.617532  581120 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-365308 proxy --port 36195]
I1002 07:15:36.617808  581120 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 07:15:36.661262  581120 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 07:15:36.661309  581120 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 07:15:36.669976  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2390a522-fbe6-463d-a523-24c9a9f33170] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc000570240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060aa00 TLS:<nil>}
I1002 07:15:36.670063  581120 retry.go:31] will retry after 100.782µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.674299  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9900ddf6-06b7-4526-91b1-a944b069977f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc00081cdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1002 07:15:36.674356  581120 retry.go:31] will retry after 182.902µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.678059  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6dc273c-342e-4a8b-9361-b329ab533c91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc000570380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6f00 TLS:<nil>}
I1002 07:15:36.678117  581120 retry.go:31] will retry after 260.573µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.689052  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c181c744-c502-48b1-819c-0be9de1493b5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1002 07:15:36.689129  581120 retry.go:31] will retry after 393.259µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.699187  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54e4597d-42ae-4a58-937d-646cb6de6265] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc00081cf00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060ab40 TLS:<nil>}
I1002 07:15:36.699262  581120 retry.go:31] will retry after 488.854µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.703908  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82eda5a2-1d9d-4ac2-8917-aad88f910380] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0005704c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c7040 TLS:<nil>}
I1002 07:15:36.704016  581120 retry.go:31] will retry after 724.545µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.708986  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c94ade6-8a40-4438-9386-723d66f21eb2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1002 07:15:36.709086  581120 retry.go:31] will retry after 748.559µs: Temporary Error: unexpected response code: 503
I1002 07:15:36.713218  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d7e81c1-fc5a-47e7-9f3a-9b0da387cfa7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc000570640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060ac80 TLS:<nil>}
I1002 07:15:36.713293  581120 retry.go:31] will retry after 1.947015ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.718585  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66157c9f-4495-4aa6-89eb-e553012eacd1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1002 07:15:36.718741  581120 retry.go:31] will retry after 1.286993ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.723759  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a2e5200-c866-4984-bd7c-375fe75fe428] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc00081d040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b040 TLS:<nil>}
I1002 07:15:36.723814  581120 retry.go:31] will retry after 3.85943ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.732518  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69d00789-6581-45f3-9549-09e6fe116075] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c72c0 TLS:<nil>}
I1002 07:15:36.732603  581120 retry.go:31] will retry after 4.296184ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.741574  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8e126d5-6726-4357-be6d-ad925ab4bcda] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0005707c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b180 TLS:<nil>}
I1002 07:15:36.741649  581120 retry.go:31] will retry after 10.301433ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.756622  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd314cc9-cecc-4be2-8c49-4d28bd4eb54f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1002 07:15:36.756703  581120 retry.go:31] will retry after 19.05187ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.779611  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18711cf9-70d4-4a64-8bac-95e4cfa8b596] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc00081d140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b2c0 TLS:<nil>}
I1002 07:15:36.779724  581120 retry.go:31] will retry after 11.469321ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.797130  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[60f85a0c-8da6-4c74-ba74-c1aa45d8e31b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c7400 TLS:<nil>}
I1002 07:15:36.797226  581120 retry.go:31] will retry after 39.9842ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.841576  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d723e5c5-5974-43bd-a74c-9392235040c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc000570900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b400 TLS:<nil>}
I1002 07:15:36.841651  581120 retry.go:31] will retry after 34.389044ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.882565  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe19a49d-226f-46b8-9726-539c1e12855c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc0007c8e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1002 07:15:36.882652  581120 retry.go:31] will retry after 33.465946ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.924163  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e4fceaf6-7f37-4912-9109-4939c80d5fdc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc000570a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b540 TLS:<nil>}
I1002 07:15:36.924268  581120 retry.go:31] will retry after 49.294676ms: Temporary Error: unexpected response code: 503
I1002 07:15:36.980633  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[381e31e2-1e6f-4197-a087-5f8073e7e43f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:36 GMT]] Body:0xc00081d280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1002 07:15:36.980708  581120 retry.go:31] will retry after 218.77662ms: Temporary Error: unexpected response code: 503
I1002 07:15:37.204316  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9039b16d-eefa-4d88-aa4f-33c9566eeb03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:37 GMT]] Body:0xc00081d400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c7540 TLS:<nil>}
I1002 07:15:37.204378  581120 retry.go:31] will retry after 331.125496ms: Temporary Error: unexpected response code: 503
I1002 07:15:37.538875  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce777cd7-2d12-44b5-b09e-f950f8e94dec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:37 GMT]] Body:0xc000570b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c7680 TLS:<nil>}
I1002 07:15:37.538974  581120 retry.go:31] will retry after 304.463562ms: Temporary Error: unexpected response code: 503
I1002 07:15:37.847986  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c19ed29-8a3c-406f-94a5-a26dbd0bf599] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:37 GMT]] Body:0xc00081d500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1002 07:15:37.848054  581120 retry.go:31] will retry after 333.344117ms: Temporary Error: unexpected response code: 503
I1002 07:15:38.184847  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6b985ed6-ef22-4527-b925-335e612d5f21] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:38 GMT]] Body:0xc0007c8f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c77c0 TLS:<nil>}
I1002 07:15:38.184947  581120 retry.go:31] will retry after 652.327098ms: Temporary Error: unexpected response code: 503
I1002 07:15:38.841428  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa18fa25-89ea-4903-9a73-3651c1862715] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:38 GMT]] Body:0xc000570c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b680 TLS:<nil>}
I1002 07:15:38.841527  581120 retry.go:31] will retry after 713.954128ms: Temporary Error: unexpected response code: 503
I1002 07:15:39.559724  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5460007-b6c0-4e8c-ad81-c46b33d1ba99] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:39 GMT]] Body:0xc00081d600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I1002 07:15:39.559806  581120 retry.go:31] will retry after 1.271069981s: Temporary Error: unexpected response code: 503
I1002 07:15:40.834691  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58f340df-c8b4-46cf-b597-9d44a4d883ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:40 GMT]] Body:0xc0007c9040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c7900 TLS:<nil>}
I1002 07:15:40.834763  581120 retry.go:31] will retry after 2.535019271s: Temporary Error: unexpected response code: 503
I1002 07:15:43.376636  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ccab5220-0011-486c-afa5-61fa174a1cb9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:43 GMT]] Body:0xc0006e7a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b7c0 TLS:<nil>}
I1002 07:15:43.376702  581120 retry.go:31] will retry after 2.598913673s: Temporary Error: unexpected response code: 503
I1002 07:15:45.979211  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[279e38b0-4277-4d62-a7ba-2161789e2c14] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:45 GMT]] Body:0xc000570e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf680 TLS:<nil>}
I1002 07:15:45.979285  581120 retry.go:31] will retry after 8.343009521s: Temporary Error: unexpected response code: 503
I1002 07:15:54.329185  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[375fcef4-4111-461d-aad8-0272e3f52d78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:15:54 GMT]] Body:0xc0007c9140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I1002 07:15:54.329279  581120 retry.go:31] will retry after 7.240394598s: Temporary Error: unexpected response code: 503
I1002 07:16:01.574426  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eee47803-8d5a-49e9-afd6-6897128bb2a9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:16:01 GMT]] Body:0xc0006e7c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1002 07:16:01.574511  581120 retry.go:31] will retry after 13.554330028s: Temporary Error: unexpected response code: 503
I1002 07:16:15.136050  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b21a4df-2343-4eef-8617-a974837ba5ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:16:15 GMT]] Body:0xc0007c91c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I1002 07:16:15.136174  581120 retry.go:31] will retry after 20.63561731s: Temporary Error: unexpected response code: 503
I1002 07:16:35.775903  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96642b0f-b498-4dbc-9ad4-70a2c181980d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:16:35 GMT]] Body:0xc0006e7d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I1002 07:16:35.775973  581120 retry.go:31] will retry after 32.870982241s: Temporary Error: unexpected response code: 503
I1002 07:17:08.651993  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4e61fd3-7a68-4e22-a3ad-1c08d6147606] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:17:08 GMT]] Body:0xc0006e7e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfcc0 TLS:<nil>}
I1002 07:17:08.652060  581120 retry.go:31] will retry after 25.484811602s: Temporary Error: unexpected response code: 503
I1002 07:17:34.143415  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c296765-e735-4ba3-bc7a-4c70a219e9eb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:17:34 GMT]] Body:0xc0006e7ec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00060b900 TLS:<nil>}
I1002 07:17:34.143485  581120 retry.go:31] will retry after 1m4.178942099s: Temporary Error: unexpected response code: 503
I1002 07:18:38.330630  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4e5c5d89-1c58-41a8-9ba8-469c100db577] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:18:38 GMT]] Body:0xc00081c900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I1002 07:18:38.330749  581120 retry.go:31] will retry after 57.065106446s: Temporary Error: unexpected response code: 503
I1002 07:19:35.400833  581120 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ff0bad9-9e0d-4b80-ab6e-6599c46bef51] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:19:35 GMT]] Body:0xc00081c880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003c6000 TLS:<nil>}
I1002 07:19:35.400910  581120 retry.go:31] will retry after 1m5.190444211s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-365308 -n functional-365308
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs -n 25: (1.719400645s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start     │ -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                          │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ start     │ -p functional-365308 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-365308 --alsologtostderr -v=1                                                                                               │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ ssh       │ functional-365308 ssh sudo systemctl is-active docker                                                                                                        │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │                     │
	│ ssh       │ functional-365308 ssh sudo systemctl is-active containerd                                                                                                    │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │                     │
	│ license   │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image save kicbase/echo-server:functional-365308 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image rm kicbase/echo-server:functional-365308 --alsologtostderr                                                                           │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image     │ functional-365308 image save --daemon kicbase/echo-server:functional-365308 --alsologtostderr                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /etc/ssl/certs/566080.pem                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /usr/share/ca-certificates/566080.pem                                                                                         │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /etc/ssl/certs/5660802.pem                                                                                                    │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /usr/share/ca-certificates/5660802.pem                                                                                        │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh       │ functional-365308 ssh sudo cat /etc/test/nested/copy/566080/hosts                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:15:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:15:35.187582  581092 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:35.187821  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.187830  581092 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:35.187834  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.188070  581092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:35.188576  581092 out.go:368] Setting JSON to false
	I1002 07:15:35.189718  581092 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:35.189819  581092 start.go:140] virtualization: kvm guest
	I1002 07:15:35.191796  581092 out.go:179] * [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:35.193503  581092 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:35.193556  581092 notify.go:220] Checking for updates...
	I1002 07:15:35.196373  581092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:35.197849  581092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:35.199369  581092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:35.200924  581092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:35.202196  581092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:35.203955  581092 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:35.204459  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.204534  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.219264  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I1002 07:15:35.219851  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.220549  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.220575  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.220979  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.221194  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.221495  581092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:35.221923  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.222012  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.236449  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I1002 07:15:35.236952  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.237483  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.237509  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.237857  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.238107  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.270766  581092 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 07:15:35.272053  581092 start.go:304] selected driver: kvm2
	I1002 07:15:35.272079  581092 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.272230  581092 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:35.273221  581092 cni.go:84] Creating CNI manager for ""
	I1002 07:15:35.273276  581092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 07:15:35.273325  581092 start.go:348] cluster config:
	{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.274855  581092 out.go:179] * dry-run validation complete!
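The block above follows the klog format declared in its own header (`Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). When triaging a long report like this one, it can help to parse those lines into fields; the sketch below is not part of the minikube tooling, just a minimal parser assuming only that documented format:

```python
import re

# Matches klog-style lines: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r'^(?P<level>[IWEF])'                      # severity: Info/Warning/Error/Fatal
    r'(?P<month>\d{2})(?P<day>\d{2})\s+'       # mmdd
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'     # hh:mm:ss.uuuuuu
    r'(?P<threadid>\d+)\s+'                    # thread/goroutine id
    r'(?P<file>[^:]+):(?P<line>\d+)\]\s+'      # source file:line]
    r'(?P<msg>.*)$'                            # free-form message
)

def parse_klog(line: str):
    """Return a dict of klog fields, or None if the line is not klog-formatted."""
    m = KLOG_RE.match(line.strip())
    return m.groupdict() if m else None
```

For example, feeding it the first log line of this block (`I1002 07:15:35.187582  581092 out.go:360] ...`) yields level `I`, file `out.go`, line `360`, and the message text; non-klog lines (such as the CRI-O journal entries below) return `None`.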
	
	
	==> CRI-O <==
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.334444430Z" level=debug msg="Pod sandbox \"c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59\" has work directory \"/var/lib/containers/storage/overlay-containers/c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59/userdata\"" file="storage/runtime.go:274"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.335166379Z" level=debug msg="Pod sandbox \"c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59\" has run directory \"/var/run/containers/storage/overlay-containers/c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59/userdata\"" file="storage/runtime.go:284"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.336385976Z" level=debug msg="Setting stage for resource k8s_mysql-5bb876957f-lcmlb_default_264f5bca-ab0e-4641-878b-d94257057caa_0 from sandbox storage creation to sandbox shm creation" file="resourcestore/resourcestore.go:227" id=98ca27c5-0175-4054-adb9-afc1a442f242 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.336584908Z" level=debug msg="Setting stage for resource k8s_mysql-5bb876957f-lcmlb_default_264f5bca-ab0e-4641-878b-d94257057caa_0 from sandbox shm creation to sandbox spec configuration" file="resourcestore/resourcestore.go:227" id=98ca27c5-0175-4054-adb9-afc1a442f242 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.337370022Z" level=debug msg="Setting stage for resource k8s_mysql-5bb876957f-lcmlb_default_264f5bca-ab0e-4641-878b-d94257057caa_0 from sandbox spec configuration to sandbox namespace creation" file="resourcestore/resourcestore.go:227" id=98ca27c5-0175-4054-adb9-afc1a442f242 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.337451863Z" level=debug msg="Calling pinns with [-d /var/run -f 18736839-9ac5-46a4-b88d-a1d44352de01 -s net.ipv4.ip_unprivileged_port_start=0 --ipc --net --uts]" file="nsmgr/nsmgr_linux.go:121"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.341670163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a05204be-3fda-429b-9827-c29b2e6974ad name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.341791616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a05204be-3fda-429b-9827-c29b2e6974ad name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.344776333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b879eac2-03ab-4c4f-be9c-06b539d04719 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.347515046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389636347290546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b879eac2-03ab-4c4f-be9c-06b539d04719 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.348851924Z" level=debug msg="Setting stage for resource k8s_mysql-5bb876957f-lcmlb_default_264f5bca-ab0e-4641-878b-d94257057caa_0 from sandbox namespace creation to sandbox network creation" file="resourcestore/resourcestore.go:227" id=98ca27c5-0175-4054-adb9-afc1a442f242 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.350108719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07d0d246-4842-4fac-9091-2fe759b34a56 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.350394253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07d0d246-4842-4fac-9091-2fe759b34a56 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.350967018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07d0d246-4842-4fac-9091-2fe759b34a56 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.352236985Z" level=info msg="Got pod network &{Name:mysql-5bb876957f-lcmlb Namespace:default ID:c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59 UID:264f5bca-ab0e-4641-878b-d94257057caa NetNS:/var/run/netns/18736839-9ac5-46a4-b88d-a1d44352de01 Networks:[] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod264f5bca-ab0e-4641-878b-d94257057caa PodAnnotations:0xc0004f1980}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.352377250Z" level=info msg="Adding pod default_mysql-5bb876957f-lcmlb to CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:556"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.426634374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=962c4447-629e-4b09-90c1-d1602b90eb32 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.426775351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=962c4447-629e-4b09-90c1-d1602b90eb32 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.430011027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bf3cd6a-b449-4b69-b0ea-7ca907d70549 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.430567686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389636430546044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bf3cd6a-b449-4b69-b0ea-7ca907d70549 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.431994430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1de8fc63-49f9-49dd-9c68-333adfae2b96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.432211088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1de8fc63-49f9-49dd-9c68-333adfae2b96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.434337406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1de8fc63-49f9-49dd-9c68-333adfae2b96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.456527692Z" level=info msg="Got pod network &{Name:mysql-5bb876957f-lcmlb Namespace:default ID:c498d6faeadd9b93a715cb79db0fe192c4965749e38e8056111e883611273d59 UID:264f5bca-ab0e-4641-878b-d94257057caa NetNS:/var/run/netns/18736839-9ac5-46a4-b88d-a1d44352de01 Networks:[] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod264f5bca-ab0e-4641-878b-d94257057caa PodAnnotations:0xc0004f1980}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Oct 02 07:20:36 functional-365308 crio[5347]: time="2025-10-02 07:20:36.456903382Z" level=info msg="Checking pod default_mysql-5bb876957f-lcmlb for CNI network bridge (type=bridge)" file="ocicni/ocicni.go:695"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ad6e5cbe559e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   48cfbcc33d924       busybox-mount
	ed3d142f91fc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   ab75d24301c8a       storage-provisioner
	f10482c188ce1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   e49891059042a       coredns-66bc5c9577-dr2ch
	7fb38312e961b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   a6ca9343fea73       kube-apiserver-functional-365308
	c78daade2b80d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            2                   a54c64bfb97e5       kube-scheduler-functional-365308
	6357fc139b532       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   2                   18bbf066598c0       kube-controller-manager-functional-365308
	fa7569304fff0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      2                   0d710f08f4acb       etcd-functional-365308
	8c62a867dcb23       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                2                   544dcc6a7d080       kube-proxy-jxg4z
	654514232bd5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   01744b2533cef       coredns-66bc5c9577-dr2ch
	b8c09b58f74b3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                1                   7ce74d9036d63       kube-proxy-jxg4z
	2e56087626b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       2                   b2f9ea0ec2efe       storage-provisioner
	dd663a3bafc3c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   1                   ed9a30ec8bcca       kube-controller-manager-functional-365308
	a5273276a834c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      1                   6ab97c6c555da       etcd-functional-365308
	21b23172820e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            1                   c916958d08d1d       kube-scheduler-functional-365308
	
	
	==> coredns [654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43655 - 35197 "HINFO IN 6464527999215105262.1447493269828718080. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019873774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45177 - 47335 "HINFO IN 2462640784439700111.6184602421018081306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431628671s
	
	
	==> describe nodes <==
	Name:               functional-365308
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-365308
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-365308
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_12_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365308
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:20:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    functional-365308
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 e775a27534b64fe09ad47371f450f25e
	  System UUID:                e775a275-34b6-4fe0-9ad4-7371f450f25e
	  Boot ID:                    b177903b-98b0-4cba-8131-16d298c21e83
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l8hpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     hello-node-connect-7d85dfc575-dzdnf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     mysql-5bb876957f-lcmlb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 coredns-66bc5c9577-dr2ch                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m28s
	  kube-system                 etcd-functional-365308                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m33s
	  kube-system                 kube-apiserver-functional-365308              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-controller-manager-functional-365308     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-jxg4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-functional-365308              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-98ddz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7qqfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m25s                  kube-proxy       
	  Normal  Starting                 6m40s                  kube-proxy       
	  Normal  Starting                 7m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m33s                  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m33s                  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s                  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m33s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m32s                  kubelet          Node functional-365308 status is now: NodeReady
	  Normal  RegisteredNode           8m29s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  NodeHasNoDiskPressure    7m30s (x8 over 7m30s)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m30s (x8 over 7m30s)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m30s (x7 over 7m30s)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m23s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  Starting                 6m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m46s (x8 over 6m46s)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x8 over 6m46s)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x7 over 6m46s)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m39s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000075] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004143] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.199062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.094650] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 2 07:12] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.636584] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.449328] kauditd_printk_skb: 220 callbacks suppressed
	[  +8.887133] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 07:13] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.586313] kauditd_printk_skb: 155 callbacks suppressed
	[  +6.305623] kauditd_printk_skb: 131 callbacks suppressed
	[ +13.021871] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.033866] kauditd_printk_skb: 207 callbacks suppressed
	[  +5.701692] kauditd_printk_skb: 298 callbacks suppressed
	[Oct 2 07:14] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000746] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.355545] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:15] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.120989] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 2 07:17] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67] <==
	{"level":"warn","ts":"2025-10-02T07:13:09.401291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.421763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.455645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.483691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.496396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.509787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.562568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:34.951811Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:34.951953Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:34.952055Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:35.036018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-02T07:13:35.035951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-02T07:13:35.036229Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9759e6b18ded37f5","current-leader-member-id":"9759e6b18ded37f5"}
	{"level":"info","ts":"2025-10-02T07:13:35.036310Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:35.036335Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036460Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036505Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036531Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036539Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040265Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"error","ts":"2025-10-02T07:13:35.040323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040345Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2025-10-02T07:13:35.040351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	
	
	==> etcd [fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3] <==
	{"level":"warn","ts":"2025-10-02T07:13:53.529538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.555865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.576002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.605937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.606404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.619797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.633439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.661492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.670881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.683973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.700911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.720792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.729169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.741013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.758185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.778650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.797065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.808420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.823816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.849545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.863218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.880463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.887865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.897825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.951844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:20:36 up 9 min,  0 users,  load average: 0.35, 0.29, 0.18
	Linux functional-365308 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426] <==
	I1002 07:13:54.709575       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:13:54.709589       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:13:54.709594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:13:54.709600       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:13:54.711620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:13:54.712007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:13:54.751895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:13:54.751968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:13:54.759979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:13:55.522212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:13:55.586546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:13:56.286550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:13:56.324336       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:13:56.355501       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:13:56.363404       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:13:58.032182       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:13:58.382007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:13:58.430133       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:13.159828       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.249.105"}
	I1002 07:14:18.720798       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.213.147"}
	I1002 07:14:18.786356       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.255.36"}
	I1002 07:15:36.247424       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:15:36.556928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.63.200"}
	I1002 07:15:36.587799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.110.242"}
	I1002 07:20:35.901231       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.166.192"}
	
	
	==> kube-controller-manager [6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a] <==
	I1002 07:13:58.027514       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:13:58.027570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:58.029299       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 07:13:58.030433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:13:58.033851       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:13:58.033886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:13:58.033907       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:58.033930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:13:58.035156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 07:13:58.046575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 07:13:58.061947       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:13:58.071455       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:13:58.072618       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:58.076259       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:58.076592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:58.078406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:13:58.078444       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1002 07:15:36.382290       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.391320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.396231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.400237       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.408508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.411325       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.424345       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.431031       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4] <==
	I1002 07:13:13.747781       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:13:13.752049       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:13:13.753799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:13.753817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:13:13.757257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:13:13.757415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:13:13.759823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:13:13.765229       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:13:13.768404       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:13:13.771788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:13:13.772683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.777870       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:13.777883       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:13.777908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:13:13.781388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.781429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:13:13.781437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:13:13.781500       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:13:13.781652       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:13:13.782005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:13:13.782269       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365308"
	I1002 07:13:13.782440       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:13:13.782628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 07:13:13.796106       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:13.800496       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed] <==
	I1002 07:13:56.196476       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:56.297286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:56.298389       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:56.298939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:56.378665       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:56.378805       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:56.378851       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:56.388885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:56.389271       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:56.389298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:56.395235       1 config.go:200] "Starting service config controller"
	I1002 07:13:56.395246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:56.395308       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:56.395315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:56.395328       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:56.395332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:56.395951       1 config.go:309] "Starting node config controller"
	I1002 07:13:56.395995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:56.396012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:56.496073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:56.498921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:56.498946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc] <==
	I1002 07:13:11.437618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:11.541886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:11.542318       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:11.542473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:11.631110       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:11.631212       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:11.631287       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:11.649035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:11.650112       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:11.650245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:11.656638       1 config.go:200] "Starting service config controller"
	I1002 07:13:11.657046       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:11.657187       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:11.657276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:11.657306       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:11.657457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:11.657578       1 config.go:309] "Starting node config controller"
	I1002 07:13:11.657607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:11.757312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:11.757366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:11.758052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:11.758405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52] <==
	I1002 07:13:09.712588       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:13:10.924661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:10.924829       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:10.950803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:13:10.950890       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:13:10.951166       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.951482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.956572       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:10.956634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:11.053683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:11.055859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:11.055920       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:13:34.946896       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:34.958210       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:34.951293       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:34.958307       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 07:13:34.958336       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:34.958367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b] <==
	I1002 07:13:53.387037       1 serving.go:386] Generated self-signed cert in-memory
	W1002 07:13:54.604000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 07:13:54.604051       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:13:54.604062       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 07:13:54.604068       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 07:13:54.662454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:54.662496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:54.669528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:54.672053       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:54.772597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:42 functional-365308 kubelet[6124]: E1002 07:19:42.880414    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.669179    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/podbce43dc216bc0b32b8ca943b7b45044c/crio-c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Error finding container c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Status 404 returned error can't find the container with id c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.669921    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7e25ac55-0338-49de-8426-92f577e709ff/crio-01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Error finding container 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Status 404 returned error can't find the container with id 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670344    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod04b832b4ca47e9448ee74c5301716261/crio-ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Error finding container ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Status 404 returned error can't find the container with id ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670603    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7eb2aa26-c024-46b3-ba05-c75a03d6e0bc/crio-7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Error finding container 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Status 404 returned error can't find the container with id 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670964    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod643bf1be4096ec113d17583729218a55/crio-6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Error finding container 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Status 404 returned error can't find the container with id 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.671340    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod24fe67cc-1e8e-4172-8735-0823a4b4e86c/crio-b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Error finding container b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Status 404 returned error can't find the container with id b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.794040    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389590793193635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.794112    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389590793193635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:19:57 functional-365308 kubelet[6124]: E1002 07:19:57.565456    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:20:00 functional-365308 kubelet[6124]: E1002 07:20:00.797007    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389600796453637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:00 functional-365308 kubelet[6124]: E1002 07:20:00.797337    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389600796453637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:10 functional-365308 kubelet[6124]: E1002 07:20:10.799404    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389610798978146  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:10 functional-365308 kubelet[6124]: E1002 07:20:10.799450    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389610798978146  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.565965    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971086    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971192    6124 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971361    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-7qqfn_kubernetes-dashboard(d1677996-64fe-4690-93e5-3f89cb8daf89): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971395    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:20:20 functional-365308 kubelet[6124]: E1002 07:20:20.802382    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389620801759663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:20 functional-365308 kubelet[6124]: E1002 07:20:20.802424    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389620801759663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:25 functional-365308 kubelet[6124]: E1002 07:20:25.570036    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:20:30 functional-365308 kubelet[6124]: E1002 07:20:30.804647    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389630804332581  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 02 07:20:30 functional-365308 kubelet[6124]: E1002 07:20:30.804870    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389630804332581  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 02 07:20:36 functional-365308 kubelet[6124]: I1002 07:20:36.070012    6124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6j2\" (UniqueName: \"kubernetes.io/projected/264f5bca-ab0e-4641-878b-d94257057caa-kube-api-access-hm6j2\") pod \"mysql-5bb876957f-lcmlb\" (UID: \"264f5bca-ab0e-4641-878b-d94257057caa\") " pod="default/mysql-5bb876957f-lcmlb"
	
	
	==> storage-provisioner [2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47] <==
	I1002 07:13:11.241110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 07:13:11.259358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 07:13:11.259418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 07:13:11.268920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:14.723250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:18.984104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:22.583256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:25.638850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.661791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.669613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.669764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 07:13:28.669910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	I1002 07:13:28.670856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a387236-ee84-46b3-84ea-284c6b247438", APIVersion:"v1", ResourceVersion:"520", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f became leader
	W1002 07:13:28.675446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.685078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.769996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	W1002 07:13:30.688158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:30.693655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.698438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.708988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.712353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.722195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2] <==
	W1002 07:20:11.563962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:13.567253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:13.572400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:15.577484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:15.584289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:17.588778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:17.594518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:19.597869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:19.606842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:21.610404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:21.616321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:23.621207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:23.629606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:25.634998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:25.640654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:27.645950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:27.655156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:29.659231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:29.664488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:31.669771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:31.674156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:33.678885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:33.684447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:35.688526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:35.695353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
helpers_test.go:269: (dbg) Run:  kubectl --context functional-365308 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1 (125.801429ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test && date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:15:28 +0000
	      Finished:     Thu, 02 Oct 2025 07:15:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5w4j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v5w4j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-365308
	  Normal  Pulling    6m15s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.334s (1m5.616s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l8hpp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw92f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mw92f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m19s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l8hpp to functional-365308
	  Warning  Failed     5m12s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m25s (x2 over 5m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m25s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m12s (x2 over 5m11s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m12s (x2 over 5m11s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    119s (x3 over 6m18s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dzdnf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vkvdb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m19s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
	  Warning  Failed     5m48s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x3 over 5m48s)  kubelet            Error: ErrImagePull
	  Warning  Failed     55s (x2 over 4m9s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    25s (x4 over 5m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     25s (x4 over 5m47s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    14s (x4 over 6m18s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lcmlb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:20:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hm6j2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hm6j2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/mysql-5bb876957f-lcmlb to functional-365308
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6pnz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s6pnz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m13s                default-scheduler  Successfully assigned default/sp-pod to functional-365308
	  Warning  Failed     4m39s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s (x2 over 4m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     85s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    74s (x2 over 4m39s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     74s (x2 over 4m39s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    59s (x3 over 6m12s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-98ddz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7qqfn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-365308 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-365308 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dzdnf" [4bbb74ce-b506-4082-9761-57f2cf22a125] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 07:24:19.05495166 +0000 UTC m=+1637.222930460
functional_test.go:1645: (dbg) Run:  kubectl --context functional-365308 describe po hello-node-connect-7d85dfc575-dzdnf -n default
functional_test.go:1645: (dbg) kubectl --context functional-365308 describe po hello-node-connect-7d85dfc575-dzdnf -n default:
Name:             hello-node-connect-7d85dfc575-dzdnf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365308/192.168.39.84
Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vkvdb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
  Warning  Failed     9m30s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m37s (x2 over 7m51s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     90s (x4 over 9m30s)    kubelet            Error: ErrImagePull
  Warning  Failed     90s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    21s (x9 over 9m29s)    kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     21s (x9 over 9m29s)    kubelet            Error: ImagePullBackOff
  Normal   Pulling    8s (x5 over 10m)       kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-365308 logs hello-node-connect-7d85dfc575-dzdnf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-365308 logs hello-node-connect-7d85dfc575-dzdnf -n default: exit status 1 (91.546209ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dzdnf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-365308 logs hello-node-connect-7d85dfc575-dzdnf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-365308 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-dzdnf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365308/192.168.39.84
Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vkvdb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
  Warning  Failed     9m30s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m37s (x2 over 7m51s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     90s (x4 over 9m30s)    kubelet            Error: ErrImagePull
  Warning  Failed     90s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    21s (x9 over 9m29s)    kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     21s (x9 over 9m29s)    kubelet            Error: ImagePullBackOff
  Normal   Pulling    8s (x5 over 10m)       kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-365308 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-365308 logs -l app=hello-node-connect: exit status 1 (77.766067ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-dzdnf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-365308 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-365308 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.213.147
IPs:                      10.111.213.147
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31057/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-365308 -n functional-365308
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs -n 25: (1.579375788s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image save kicbase/echo-server:functional-365308 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image rm kicbase/echo-server:functional-365308 --alsologtostderr                                                                           │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image save --daemon kicbase/echo-server:functional-365308 --alsologtostderr                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/566080.pem                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /usr/share/ca-certificates/566080.pem                                                                                         │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/5660802.pem                                                                                                    │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /usr/share/ca-certificates/5660802.pem                                                                                        │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/test/nested/copy/566080/hosts                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format short --alsologtostderr                                                                                                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format yaml --alsologtostderr                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh pgrep buildkitd                                                                                                                        │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │                     │
	│ image          │ functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr                                                       │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls                                                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format json --alsologtostderr                                                                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format table --alsologtostderr                                                                                                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ service        │ functional-365308 service list                                                                                                                               │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:15:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:15:35.187582  581092 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:35.187821  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.187830  581092 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:35.187834  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.188070  581092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:35.188576  581092 out.go:368] Setting JSON to false
	I1002 07:15:35.189718  581092 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:35.189819  581092 start.go:140] virtualization: kvm guest
	I1002 07:15:35.191796  581092 out.go:179] * [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:35.193503  581092 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:35.193556  581092 notify.go:220] Checking for updates...
	I1002 07:15:35.196373  581092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:35.197849  581092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:35.199369  581092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:35.200924  581092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:35.202196  581092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:35.203955  581092 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:35.204459  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.204534  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.219264  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I1002 07:15:35.219851  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.220549  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.220575  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.220979  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.221194  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.221495  581092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:35.221923  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.222012  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.236449  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I1002 07:15:35.236952  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.237483  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.237509  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.237857  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.238107  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.270766  581092 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 07:15:35.272053  581092 start.go:304] selected driver: kvm2
	I1002 07:15:35.272079  581092 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.272230  581092 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:35.273221  581092 cni.go:84] Creating CNI manager for ""
	I1002 07:15:35.273276  581092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 07:15:35.273325  581092 start.go:348] cluster config:
	{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.274855  581092 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.191744377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389860191680904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e0f969b-1be0-47b0-9961-6040cc41d311 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.192324940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=093ff6bc-dcb9-44fc-ac88-62e26c16efd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.192505666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=093ff6bc-dcb9-44fc-ac88-62e26c16efd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.193396082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=093ff6bc-dcb9-44fc-ac88-62e26c16efd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.241822597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64b5e45f-be47-4d47-ae78-2b57f056191f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.241916643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64b5e45f-be47-4d47-ae78-2b57f056191f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.243420893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee151443-38ff-4e83-afcb-af2b541fe583 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.244188332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389860244165568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee151443-38ff-4e83-afcb-af2b541fe583 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.245048182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2026e8bd-fdbf-4256-bb61-2917a64b4c8c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.245104015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2026e8bd-fdbf-4256-bb61-2917a64b4c8c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.245379878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2026e8bd-fdbf-4256-bb61-2917a64b4c8c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.282817636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5a189ec-ca63-4a2d-a4cc-bc9b54deda11 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.282905584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5a189ec-ca63-4a2d-a4cc-bc9b54deda11 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.284087080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1536cde9-9a59-40ad-8dcf-b4a24f17d84b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.285472686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389860285451658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1536cde9-9a59-40ad-8dcf-b4a24f17d84b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.333398554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49f4f8be-aa0b-44d4-9158-861ad859843c name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.333485390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49f4f8be-aa0b-44d4-9158-861ad859843c name=/runtime.v1.RuntimeService/Version
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.334995016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50de8846-ad46-4b71-acf8-3111698900d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.336208317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389860336184866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50de8846-ad46-4b71-acf8-3111698900d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.336858641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=269e9586-24a5-474b-a85f-b29ca6d93274 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.337159950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=269e9586-24a5-474b-a85f-b29ca6d93274 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:24:20 functional-365308 crio[5347]: time="2025-10-02 07:24:20.337470211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=269e9586-24a5-474b-a85f-b29ca6d93274 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ad6e5cbe559e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 minutes ago       Exited              mount-munger              0                   48cfbcc33d924       busybox-mount
	ed3d142f91fc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   ab75d24301c8a       storage-provisioner
	f10482c188ce1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   e49891059042a       coredns-66bc5c9577-dr2ch
	7fb38312e961b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   a6ca9343fea73       kube-apiserver-functional-365308
	c78daade2b80d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            2                   a54c64bfb97e5       kube-scheduler-functional-365308
	6357fc139b532       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   2                   18bbf066598c0       kube-controller-manager-functional-365308
	fa7569304fff0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      2                   0d710f08f4acb       etcd-functional-365308
	8c62a867dcb23       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                2                   544dcc6a7d080       kube-proxy-jxg4z
	654514232bd5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   01744b2533cef       coredns-66bc5c9577-dr2ch
	b8c09b58f74b3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                1                   7ce74d9036d63       kube-proxy-jxg4z
	2e56087626b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   b2f9ea0ec2efe       storage-provisioner
	dd663a3bafc3c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   1                   ed9a30ec8bcca       kube-controller-manager-functional-365308
	a5273276a834c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      1                   6ab97c6c555da       etcd-functional-365308
	21b23172820e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            1                   c916958d08d1d       kube-scheduler-functional-365308
	
	
	==> coredns [654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43655 - 35197 "HINFO IN 6464527999215105262.1447493269828718080. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019873774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45177 - 47335 "HINFO IN 2462640784439700111.6184602421018081306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431628671s
	
	
	==> describe nodes <==
	Name:               functional-365308
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-365308
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-365308
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_12_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365308
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:23:46 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:23:46 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:23:46 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:23:46 +0000   Thu, 02 Oct 2025 07:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    functional-365308
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 e775a27534b64fe09ad47371f450f25e
	  System UUID:                e775a275-34b6-4fe0-9ad4-7371f450f25e
	  Boot ID:                    b177903b-98b0-4cba-8131-16d298c21e83
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l8hpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-dzdnf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-lcmlb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    3m45s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-dr2ch                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-365308                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-365308              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-365308     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jxg4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-365308              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-98ddz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7qqfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-365308 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	
	
	==> dmesg <==
	[  +0.004143] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.199062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.094650] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 2 07:12] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.636584] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.449328] kauditd_printk_skb: 220 callbacks suppressed
	[  +8.887133] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 07:13] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.586313] kauditd_printk_skb: 155 callbacks suppressed
	[  +6.305623] kauditd_printk_skb: 131 callbacks suppressed
	[ +13.021871] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.033866] kauditd_printk_skb: 207 callbacks suppressed
	[  +5.701692] kauditd_printk_skb: 298 callbacks suppressed
	[Oct 2 07:14] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000746] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.355545] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:15] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.120989] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 2 07:17] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 2 07:20] crun[9469]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +5.006843] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67] <==
	{"level":"warn","ts":"2025-10-02T07:13:09.401291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.421763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.455645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.483691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.496396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.509787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.562568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:34.951811Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:34.951953Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:34.952055Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:35.036018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-02T07:13:35.035951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-02T07:13:35.036229Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9759e6b18ded37f5","current-leader-member-id":"9759e6b18ded37f5"}
	{"level":"info","ts":"2025-10-02T07:13:35.036310Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:35.036335Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036460Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036505Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036531Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036539Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040265Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"error","ts":"2025-10-02T07:13:35.040323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040345Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2025-10-02T07:13:35.040351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	
	
	==> etcd [fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3] <==
	{"level":"warn","ts":"2025-10-02T07:13:53.605937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.606404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.619797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.633439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.661492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.670881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.683973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.700911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.720792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.729169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.741013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.758185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.778650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.797065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.808420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.823816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.849545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.863218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.880463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.887865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.897825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.951844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:23:52.928093Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2025-10-02T07:23:52.954819Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1058,"took":"25.843294ms","hash":2224020673,"current-db-size-bytes":3448832,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-02T07:23:52.954861Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2224020673,"revision":1058,"compact-revision":-1}
	
	
	==> kernel <==
	 07:24:20 up 12 min,  0 users,  load average: 0.23, 0.31, 0.21
	Linux functional-365308 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426] <==
	I1002 07:13:54.709589       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:13:54.709594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:13:54.709600       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:13:54.711620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:13:54.712007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:13:54.751895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:13:54.751968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:13:54.759979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:13:55.522212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:13:55.586546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:13:56.286550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:13:56.324336       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:13:56.355501       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:13:56.363404       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:13:58.032182       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:13:58.382007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:13:58.430133       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:13.159828       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.249.105"}
	I1002 07:14:18.720798       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.213.147"}
	I1002 07:14:18.786356       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.255.36"}
	I1002 07:15:36.247424       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:15:36.556928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.63.200"}
	I1002 07:15:36.587799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.110.242"}
	I1002 07:20:35.901231       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.166.192"}
	I1002 07:23:54.652347       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a] <==
	I1002 07:13:58.027514       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:13:58.027570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:58.029299       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 07:13:58.030433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:13:58.033851       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:13:58.033886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:13:58.033907       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:58.033930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:13:58.035156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 07:13:58.046575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 07:13:58.061947       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:13:58.071455       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:13:58.072618       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:58.076259       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:58.076592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:58.078406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:13:58.078444       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1002 07:15:36.382290       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.391320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.396231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.400237       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.408508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.411325       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.424345       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.431031       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4] <==
	I1002 07:13:13.747781       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:13:13.752049       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:13:13.753799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:13.753817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:13:13.757257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:13:13.757415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:13:13.759823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:13:13.765229       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:13:13.768404       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:13:13.771788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:13:13.772683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.777870       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:13.777883       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:13.777908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:13:13.781388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.781429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:13:13.781437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:13:13.781500       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:13:13.781652       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:13:13.782005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:13:13.782269       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365308"
	I1002 07:13:13.782440       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:13:13.782628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 07:13:13.796106       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:13.800496       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed] <==
	I1002 07:13:56.196476       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:56.297286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:56.298389       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:56.298939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:56.378665       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:56.378805       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:56.378851       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:56.388885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:56.389271       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:56.389298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:56.395235       1 config.go:200] "Starting service config controller"
	I1002 07:13:56.395246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:56.395308       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:56.395315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:56.395328       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:56.395332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:56.395951       1 config.go:309] "Starting node config controller"
	I1002 07:13:56.395995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:56.396012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:56.496073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:56.498921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:56.498946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc] <==
	I1002 07:13:11.437618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:11.541886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:11.542318       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:11.542473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:11.631110       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:11.631212       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:11.631287       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:11.649035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:11.650112       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:11.650245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:11.656638       1 config.go:200] "Starting service config controller"
	I1002 07:13:11.657046       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:11.657187       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:11.657276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:11.657306       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:11.657457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:11.657578       1 config.go:309] "Starting node config controller"
	I1002 07:13:11.657607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:11.757312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:11.757366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:11.758052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:11.758405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52] <==
	I1002 07:13:09.712588       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:13:10.924661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:10.924829       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:10.950803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:13:10.950890       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:13:10.951166       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.951482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.956572       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:10.956634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:11.053683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:11.055859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:11.055920       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:13:34.946896       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:34.958210       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:34.951293       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:34.958307       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 07:13:34.958336       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:34.958367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b] <==
	I1002 07:13:53.387037       1 serving.go:386] Generated self-signed cert in-memory
	W1002 07:13:54.604000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 07:13:54.604051       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:13:54.604062       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 07:13:54.604068       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 07:13:54.662454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:54.662496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:54.669528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:54.672053       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:54.772597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:23:30 functional-365308 kubelet[6124]: E1002 07:23:30.851181    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389810850408814  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:23:40 functional-365308 kubelet[6124]: E1002 07:23:40.852637    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389820852180014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:23:40 functional-365308 kubelet[6124]: E1002 07:23:40.852686    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389820852180014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:23:43 functional-365308 kubelet[6124]: E1002 07:23:43.565164    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:23:49 functional-365308 kubelet[6124]: E1002 07:23:49.716214    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:23:49 functional-365308 kubelet[6124]: E1002 07:23:49.716263    6124 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:23:49 functional-365308 kubelet[6124]: E1002 07:23:49.716486    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-7qqfn_kubernetes-dashboard(d1677996-64fe-4690-93e5-3f89cb8daf89): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:23:49 functional-365308 kubelet[6124]: E1002 07:23:49.716525    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.669626    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7eb2aa26-c024-46b3-ba05-c75a03d6e0bc/crio-7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Error finding container 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Status 404 returned error can't find the container with id 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.669940    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7e25ac55-0338-49de-8426-92f577e709ff/crio-01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Error finding container 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Status 404 returned error can't find the container with id 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.670216    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod643bf1be4096ec113d17583729218a55/crio-6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Error finding container 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Status 404 returned error can't find the container with id 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.670550    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/podbce43dc216bc0b32b8ca943b7b45044c/crio-c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Error finding container c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Status 404 returned error can't find the container with id c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.670935    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod24fe67cc-1e8e-4172-8735-0823a4b4e86c/crio-b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Error finding container b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Status 404 returned error can't find the container with id b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.671313    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod04b832b4ca47e9448ee74c5301716261/crio-ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Error finding container ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Status 404 returned error can't find the container with id ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.855274    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389830854803119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:23:50 functional-365308 kubelet[6124]: E1002 07:23:50.855319    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389830854803119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:23:58 functional-365308 kubelet[6124]: E1002 07:23:58.565175    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:24:00 functional-365308 kubelet[6124]: E1002 07:24:00.857877    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389840857152769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:24:00 functional-365308 kubelet[6124]: E1002 07:24:00.857998    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389840857152769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:24:01 functional-365308 kubelet[6124]: E1002 07:24:01.568086    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:24:10 functional-365308 kubelet[6124]: E1002 07:24:10.861826    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389850860264677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:24:10 functional-365308 kubelet[6124]: E1002 07:24:10.861850    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389850860264677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:24:15 functional-365308 kubelet[6124]: E1002 07:24:15.566255    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:24:20 functional-365308 kubelet[6124]: E1002 07:24:20.864403    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389860864145297  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:24:20 functional-365308 kubelet[6124]: E1002 07:24:20.864445    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389860864145297  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47] <==
	I1002 07:13:11.241110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 07:13:11.259358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 07:13:11.259418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 07:13:11.268920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:14.723250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:18.984104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:22.583256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:25.638850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.661791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.669613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.669764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 07:13:28.669910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	I1002 07:13:28.670856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a387236-ee84-46b3-84ea-284c6b247438", APIVersion:"v1", ResourceVersion:"520", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f became leader
	W1002 07:13:28.675446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.685078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.769996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	W1002 07:13:30.688158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:30.693655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.698438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.708988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.712353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.722195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2] <==
	W1002 07:23:56.846413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:23:58.850241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:23:58.858436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:00.862532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:00.870023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:02.874099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:02.879346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:04.882383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:04.886907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:06.889861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:06.900827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:08.904189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:08.912786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:10.916437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:10.922419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:12.926039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:12.935039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:14.938423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:14.945382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:16.950147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:16.955307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:18.959491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:18.965870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:20.970455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:20.978999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
helpers_test.go:269: (dbg) Run:  kubectl --context functional-365308 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1 (101.93213ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:15:28 +0000
	      Finished:     Thu, 02 Oct 2025 07:15:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5w4j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v5w4j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m59s  default-scheduler  Successfully assigned default/busybox-mount to functional-365308
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.334s (1m5.616s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m53s  kubelet            Created container: mount-munger
	  Normal  Started    8m53s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l8hpp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw92f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mw92f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l8hpp to functional-365308
	  Warning  Failed     8m56s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m2s (x3 over 8m56s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m2s (x2 over 6m9s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m25s (x5 over 8m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m25s (x5 over 8m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m13s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dzdnf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vkvdb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
	  Warning  Failed     9m32s                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m39s (x2 over 7m53s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x4 over 9m32s)    kubelet            Error: ErrImagePull
	  Warning  Failed     92s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    23s (x9 over 9m31s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     23s (x9 over 9m31s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lcmlb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:20:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hm6j2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hm6j2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m45s                default-scheduler  Successfully assigned default/mysql-5bb876957f-lcmlb to functional-365308
	  Warning  Failed     62s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     62s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    61s                  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     61s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x2 over 3m45s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6pnz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s6pnz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m57s                  default-scheduler  Successfully assigned default/sp-pod to functional-365308
	  Warning  Failed     5m9s                   kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m32s (x2 over 8m23s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m32s (x3 over 8m23s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m6s (x4 over 8m23s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m6s (x4 over 8m23s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    112s (x4 over 9m56s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-98ddz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7qqfn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (369.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [24fe67cc-1e8e-4172-8735-0823a4b4e86c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003927279s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-365308 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-365308 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-365308 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-365308 apply -f testdata/storage-provisioner/pod.yaml
I1002 07:14:24.572059  566080 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2] Pending
helpers_test.go:352: "sp-pod" [bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 07:20:24.89212275 +0000 UTC m=+1403.060101539
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-365308 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-365308 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365308/192.168.39.84
Start Time:       Thu, 02 Oct 2025 07:14:24 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6pnz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-s6pnz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-365308
Warning  Failed     4m26s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     72s (x2 over 4m26s)  kubelet            Error: ErrImagePull
Warning  Failed     72s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    61s (x2 over 4m26s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     61s (x2 over 4m26s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    46s (x3 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-365308 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-365308 logs sp-pod -n default: exit status 1 (75.975202ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-365308 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-365308 -n functional-365308
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs -n 25: (1.542823411s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-365308 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount     │ -p functional-365308 /tmp/TestFunctionalparallelMountCmdany-port302975222/001:/mount-9p --alsologtostderr -v=1                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh       │ functional-365308 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh       │ functional-365308 ssh -- ls -la /mount-9p                                                                                           │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh       │ functional-365308 ssh cat /mount-9p/test-1759389260561385241                                                                        │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh       │ functional-365308 ssh stat /mount-9p/created-by-test                                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh sudo umount -f /mount-9p                                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ mount     │ -p functional-365308 /tmp/TestFunctionalparallelMountCmdspecific-port1103249159/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ ssh       │ functional-365308 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ ssh       │ functional-365308 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh -- ls -la /mount-9p                                                                                           │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh sudo umount -f /mount-9p                                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ mount     │ -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount1 --alsologtostderr -v=1                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ mount     │ -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount2 --alsologtostderr -v=1                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ mount     │ -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount3 --alsologtostderr -v=1                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ ssh       │ functional-365308 ssh findmnt -T /mount1                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ ssh       │ functional-365308 ssh findmnt -T /mount1                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh findmnt -T /mount2                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ ssh       │ functional-365308 ssh findmnt -T /mount3                                                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │ 02 Oct 25 07:15 UTC │
	│ mount     │ -p functional-365308 --kill=true                                                                                                    │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ start     │ -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ start     │ -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ start     │ -p functional-365308 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-365308 --alsologtostderr -v=1                                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:15 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:15:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:15:35.187582  581092 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:35.187821  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.187830  581092 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:35.187834  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.188070  581092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:35.188576  581092 out.go:368] Setting JSON to false
	I1002 07:15:35.189718  581092 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:35.189819  581092 start.go:140] virtualization: kvm guest
	I1002 07:15:35.191796  581092 out.go:179] * [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:35.193503  581092 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:35.193556  581092 notify.go:220] Checking for updates...
	I1002 07:15:35.196373  581092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:35.197849  581092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:35.199369  581092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:35.200924  581092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:35.202196  581092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:35.203955  581092 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:35.204459  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.204534  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.219264  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I1002 07:15:35.219851  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.220549  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.220575  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.220979  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.221194  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.221495  581092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:35.221923  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.222012  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.236449  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I1002 07:15:35.236952  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.237483  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.237509  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.237857  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.238107  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.270766  581092 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 07:15:35.272053  581092 start.go:304] selected driver: kvm2
	I1002 07:15:35.272079  581092 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.272230  581092 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:35.273221  581092 cni.go:84] Creating CNI manager for ""
	I1002 07:15:35.273276  581092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 07:15:35.273325  581092 start.go:348] cluster config:
	{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.274855  581092 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.789459836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bc4b586-1744-4ea9-8a26-dfdcafacad29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.790594194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bc4b586-1744-4ea9-8a26-dfdcafacad29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.812371368Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=0e8b8c5e-0e61-45b2-9968-6090ac87555b name=/runtime.v1.RuntimeService/Status
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.812439233Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0e8b8c5e-0e61-45b2-9968-6090ac87555b name=/runtime.v1.RuntimeService/Status
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.843237403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2386203-7283-440d-bbe4-0a1e778f497f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.843409890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2386203-7283-440d-bbe4-0a1e778f497f name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.844694307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a7b53c6-5721-4467-9353-f498b3dd49dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.845923682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389625845901482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a7b53c6-5721-4467-9353-f498b3dd49dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.846513900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=118ec2d2-58e1-4397-be1f-f836c82c0652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.846587853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=118ec2d2-58e1-4397-be1f-f836c82c0652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.847005285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=118ec2d2-58e1-4397-be1f-f836c82c0652 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.883560838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89419adb-7792-41ff-a44a-7279ba27322d name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.883649888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89419adb-7792-41ff-a44a-7279ba27322d name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.885601423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92c337f7-30e1-4020-bd97-58922bc4b0b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.886236149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389625886181882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92c337f7-30e1-4020-bd97-58922bc4b0b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.886772893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83753289-9e50-41de-8d79-c12b716cb78b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.886825370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83753289-9e50-41de-8d79-c12b716cb78b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.887100620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83753289-9e50-41de-8d79-c12b716cb78b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.934852735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8755131b-1baf-454a-8efd-51c9b8851fd3 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.935245965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8755131b-1baf-454a-8efd-51c9b8851fd3 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.936956751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8d9900e-cd06-46e5-996f-17b8698f9c47 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.937913862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759389625937888055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:167805,},InodesUsed:&UInt64Value{Value:82,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8d9900e-cd06-46e5-996f-17b8698f9c47 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.938592865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=619c9792-3562-4da1-be0b-e71b8785908f name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.938643402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=619c9792-3562-4da1-be0b-e71b8785908f name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:20:25 functional-365308 crio[5347]: time="2025-10-02 07:20:25.939657735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=619c9792-3562-4da1-be0b-e71b8785908f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ad6e5cbe559e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   48cfbcc33d924       busybox-mount
	ed3d142f91fc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   ab75d24301c8a       storage-provisioner
	f10482c188ce1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   e49891059042a       coredns-66bc5c9577-dr2ch
	7fb38312e961b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   a6ca9343fea73       kube-apiserver-functional-365308
	c78daade2b80d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            2                   a54c64bfb97e5       kube-scheduler-functional-365308
	6357fc139b532       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   2                   18bbf066598c0       kube-controller-manager-functional-365308
	fa7569304fff0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      2                   0d710f08f4acb       etcd-functional-365308
	8c62a867dcb23       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                2                   544dcc6a7d080       kube-proxy-jxg4z
	654514232bd5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   01744b2533cef       coredns-66bc5c9577-dr2ch
	b8c09b58f74b3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                1                   7ce74d9036d63       kube-proxy-jxg4z
	2e56087626b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       2                   b2f9ea0ec2efe       storage-provisioner
	dd663a3bafc3c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   1                   ed9a30ec8bcca       kube-controller-manager-functional-365308
	a5273276a834c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      1                   6ab97c6c555da       etcd-functional-365308
	21b23172820e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            1                   c916958d08d1d       kube-scheduler-functional-365308
	
	
	==> coredns [654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43655 - 35197 "HINFO IN 6464527999215105262.1447493269828718080. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019873774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45177 - 47335 "HINFO IN 2462640784439700111.6184602421018081306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431628671s
	
	
	==> describe nodes <==
	Name:               functional-365308
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-365308
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-365308
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_12_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365308
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:19:20 +0000   Thu, 02 Oct 2025 07:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    functional-365308
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 e775a27534b64fe09ad47371f450f25e
	  System UUID:                e775a275-34b6-4fe0-9ad4-7371f450f25e
	  Boot ID:                    b177903b-98b0-4cba-8131-16d298c21e83
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l8hpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-node-connect-7d85dfc575-dzdnf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-dr2ch                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m18s
	  kube-system                 etcd-functional-365308                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m23s
	  kube-system                 kube-apiserver-functional-365308              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-365308     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-jxg4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-functional-365308              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-98ddz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7qqfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m15s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 7m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s                  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m23s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m22s                  kubelet          Node functional-365308 status is now: NodeReady
	  Normal  RegisteredNode           8m19s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  NodeHasNoDiskPressure    7m20s (x8 over 7m20s)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m20s (x8 over 7m20s)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m20s (x7 over 7m20s)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m13s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m29s                  node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000075] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004143] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.199062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.094650] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 2 07:12] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.636584] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.449328] kauditd_printk_skb: 220 callbacks suppressed
	[  +8.887133] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 07:13] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.586313] kauditd_printk_skb: 155 callbacks suppressed
	[  +6.305623] kauditd_printk_skb: 131 callbacks suppressed
	[ +13.021871] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.033866] kauditd_printk_skb: 207 callbacks suppressed
	[  +5.701692] kauditd_printk_skb: 298 callbacks suppressed
	[Oct 2 07:14] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000746] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.355545] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:15] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.120989] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 2 07:17] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67] <==
	{"level":"warn","ts":"2025-10-02T07:13:09.401291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.421763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.455645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.483691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.496396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.509787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.562568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:34.951811Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:34.951953Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:34.952055Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:35.036018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-02T07:13:35.035951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-02T07:13:35.036229Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9759e6b18ded37f5","current-leader-member-id":"9759e6b18ded37f5"}
	{"level":"info","ts":"2025-10-02T07:13:35.036310Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:35.036335Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036460Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036505Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036531Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036539Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040265Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"error","ts":"2025-10-02T07:13:35.040323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040345Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2025-10-02T07:13:35.040351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	
	
	==> etcd [fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3] <==
	{"level":"warn","ts":"2025-10-02T07:13:53.529538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.555865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.576002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.605937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.606404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.619797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.633439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.661492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.670881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.683973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.700911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.720792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.729169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.741013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.758185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.778650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.797065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.808420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.823816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.849545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.863218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.880463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.887865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.897825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.951844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:20:26 up 8 min,  0 users,  load average: 0.32, 0.29, 0.18
	Linux functional-365308 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426] <==
	I1002 07:13:54.696940       1 policy_source.go:240] refreshing policies
	I1002 07:13:54.709575       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:13:54.709589       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:13:54.709594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:13:54.709600       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:13:54.711620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:13:54.712007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:13:54.751895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:13:54.751968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:13:54.759979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:13:55.522212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:13:55.586546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:13:56.286550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:13:56.324336       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:13:56.355501       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:13:56.363404       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:13:58.032182       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:13:58.382007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:13:58.430133       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:13.159828       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.249.105"}
	I1002 07:14:18.720798       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.213.147"}
	I1002 07:14:18.786356       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.255.36"}
	I1002 07:15:36.247424       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:15:36.556928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.63.200"}
	I1002 07:15:36.587799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.110.242"}
	
	
	==> kube-controller-manager [6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a] <==
	I1002 07:13:58.027514       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:13:58.027570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:58.029299       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 07:13:58.030433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:13:58.033851       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:13:58.033886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:13:58.033907       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:58.033930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:13:58.035156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 07:13:58.046575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 07:13:58.061947       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:13:58.071455       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:13:58.072618       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:58.076259       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:58.076592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:58.078406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:13:58.078444       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1002 07:15:36.382290       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.391320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.396231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.400237       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.408508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.411325       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.424345       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.431031       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4] <==
	I1002 07:13:13.747781       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:13:13.752049       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:13:13.753799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:13.753817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:13:13.757257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:13:13.757415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:13:13.759823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:13:13.765229       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:13:13.768404       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:13:13.771788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:13:13.772683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.777870       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:13.777883       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:13.777908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:13:13.781388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.781429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:13:13.781437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:13:13.781500       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:13:13.781652       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:13:13.782005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:13:13.782269       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365308"
	I1002 07:13:13.782440       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:13:13.782628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 07:13:13.796106       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:13.800496       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed] <==
	I1002 07:13:56.196476       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:56.297286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:56.298389       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:56.298939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:56.378665       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:56.378805       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:56.378851       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:56.388885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:56.389271       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:56.389298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:56.395235       1 config.go:200] "Starting service config controller"
	I1002 07:13:56.395246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:56.395308       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:56.395315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:56.395328       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:56.395332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:56.395951       1 config.go:309] "Starting node config controller"
	I1002 07:13:56.395995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:56.396012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:56.496073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:56.498921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:56.498946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc] <==
	I1002 07:13:11.437618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:11.541886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:11.542318       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:11.542473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:11.631110       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:11.631212       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:11.631287       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:11.649035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:11.650112       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:11.650245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:11.656638       1 config.go:200] "Starting service config controller"
	I1002 07:13:11.657046       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:11.657187       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:11.657276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:11.657306       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:11.657457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:11.657578       1 config.go:309] "Starting node config controller"
	I1002 07:13:11.657607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:11.757312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:11.757366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:11.758052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:11.758405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52] <==
	I1002 07:13:09.712588       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:13:10.924661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:10.924829       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:10.950803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:13:10.950890       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:13:10.951166       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.951482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.956572       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:10.956634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:11.053683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:11.055859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:11.055920       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:13:34.946896       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:34.958210       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:34.951293       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:34.958307       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 07:13:34.958336       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:34.958367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b] <==
	I1002 07:13:53.387037       1 serving.go:386] Generated self-signed cert in-memory
	W1002 07:13:54.604000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 07:13:54.604051       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:13:54.604062       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 07:13:54.604068       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 07:13:54.662454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:54.662496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:54.669528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:54.672053       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:54.772597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:42 functional-365308 kubelet[6124]: E1002 07:19:42.880060    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:19:42 functional-365308 kubelet[6124]: E1002 07:19:42.880123    6124 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:19:42 functional-365308 kubelet[6124]: E1002 07:19:42.880367    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-dzdnf_default(4bbb74ce-b506-4082-9761-57f2cf22a125): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:19:42 functional-365308 kubelet[6124]: E1002 07:19:42.880414    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.669179    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/podbce43dc216bc0b32b8ca943b7b45044c/crio-c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Error finding container c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7: Status 404 returned error can't find the container with id c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.669921    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7e25ac55-0338-49de-8426-92f577e709ff/crio-01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Error finding container 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85: Status 404 returned error can't find the container with id 01744b2533cef3a7eb12e7634066cca8d95a0afbbe3c905c34d181adff498f85
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670344    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod04b832b4ca47e9448ee74c5301716261/crio-ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Error finding container ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d: Status 404 returned error can't find the container with id ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670603    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod7eb2aa26-c024-46b3-ba05-c75a03d6e0bc/crio-7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Error finding container 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809: Status 404 returned error can't find the container with id 7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.670964    6124 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod643bf1be4096ec113d17583729218a55/crio-6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Error finding container 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272: Status 404 returned error can't find the container with id 6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.671340    6124 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod24fe67cc-1e8e-4172-8735-0823a4b4e86c/crio-b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Error finding container b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1: Status 404 returned error can't find the container with id b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.794040    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389590793193635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:19:50 functional-365308 kubelet[6124]: E1002 07:19:50.794112    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389590793193635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:19:57 functional-365308 kubelet[6124]: E1002 07:19:57.565456    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:20:00 functional-365308 kubelet[6124]: E1002 07:20:00.797007    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389600796453637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:00 functional-365308 kubelet[6124]: E1002 07:20:00.797337    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389600796453637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:10 functional-365308 kubelet[6124]: E1002 07:20:10.799404    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389610798978146  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:10 functional-365308 kubelet[6124]: E1002 07:20:10.799450    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389610798978146  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.565965    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-dzdnf" podUID="4bbb74ce-b506-4082-9761-57f2cf22a125"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971086    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971192    6124 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971361    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-7qqfn_kubernetes-dashboard(d1677996-64fe-4690-93e5-3f89cb8daf89): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:20:12 functional-365308 kubelet[6124]: E1002 07:20:12.971395    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:20:20 functional-365308 kubelet[6124]: E1002 07:20:20.802382    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759389620801759663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:20 functional-365308 kubelet[6124]: E1002 07:20:20.802424    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759389620801759663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:167805}  inodes_used:{value:82}}"
	Oct 02 07:20:25 functional-365308 kubelet[6124]: E1002 07:20:25.570036    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
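Every hard failure in the kubelet section above traces back to the same root cause: anonymous image pulls from Docker Hub hitting the `toomanyrequests` rate limit. A minimal triage sketch (not part of the report; the log excerpt below is an abbreviated, hypothetical sample of the lines above) that extracts the distinct throttled image refs from a kubelet log, so each one can be pre-loaded into the cluster (for example with `minikube image load`) instead of being pulled anonymously at test time:

```shell
# Abbreviated sample of the kubelet errors above (hypothetical excerpt).
log='E1002 07:19:42.880060 6124 log.go:32] "PullImage from image service failed" err="... toomanyrequests ..." image="kicbase/echo-server:latest"
E1002 07:20:12.971086 6124 log.go:32] "PullImage from image service failed" err="... toomanyrequests ..." image="docker.io/kubernetesui/dashboard:v2.7.0"'

# Keep only rate-limited lines, then pull out the quoted image refs and dedupe.
throttled=$(printf '%s\n' "$log" \
  | grep 'toomanyrequests' \
  | sed -n 's/.*image="\([^"]*\)".*/\1/p' \
  | sort -u)
printf '%s\n' "$throttled"
```

Feeding the full kubelet section through the same pipeline would list every image this run was throttled on.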
	
	
	==> storage-provisioner [2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47] <==
	I1002 07:13:11.241110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 07:13:11.259358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 07:13:11.259418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 07:13:11.268920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:14.723250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:18.984104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:22.583256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:25.638850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.661791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.669613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.669764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 07:13:28.669910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	I1002 07:13:28.670856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a387236-ee84-46b3-84ea-284c6b247438", APIVersion:"v1", ResourceVersion:"520", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f became leader
	W1002 07:13:28.675446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.685078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.769996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	W1002 07:13:30.688158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:30.693655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.698438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.708988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.712353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.722195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2] <==
	W1002 07:20:01.508843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:03.513334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:03.519766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:05.523099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:05.532272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:07.535808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:07.541285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:09.545293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:09.550512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:11.554258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:11.563962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:13.567253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:13.572400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:15.577484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:15.584289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:17.588778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:17.594518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:19.597869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:19.606842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:21.610404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:21.616321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:23.621207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:23.629606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:25.634998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:20:25.640654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
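The two storage-provisioner sections above repeat the identical `v1 Endpoints is deprecated in v1.33+` warning dozens of times, because the provisioner's leader-election still writes `Endpoints` objects on every renewal. When scanning dumps like this, a small sketch (hypothetical helper, with an abbreviated sample of those warnings as input) can collapse repeats into `count message` pairs:

```shell
# Abbreviated sample of the repeated storage-provisioner warnings above.
log='W1002 07:20:01.508843 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 07:20:03.513334 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 07:20:03.519766 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice'

# Drop the per-line severity tag and timestamp (first two fields) so identical
# messages group together, then count occurrences of each distinct message.
summary=$(printf '%s\n' "$log" | cut -d' ' -f3- | sort | uniq -c | sort -rn)
printf '%s\n' "$summary"
```

The three sample lines collapse to a single message with a count of 3; run against the full section, the deprecation warning would dominate the tally.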
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
helpers_test.go:269: (dbg) Run:  kubectl --context functional-365308 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1 (96.10821ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:15:28 +0000
	      Finished:     Thu, 02 Oct 2025 07:15:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5w4j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v5w4j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m5s   default-scheduler  Successfully assigned default/busybox-mount to functional-365308
	  Normal  Pulling    6m5s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.334s (1m5.616s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m59s  kubelet            Created container: mount-munger
	  Normal  Started    4m59s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l8hpp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw92f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mw92f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l8hpp to functional-365308
	  Warning  Failed     5m2s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m15s (x2 over 5m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m15s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m2s (x2 over 5m1s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m2s (x2 over 5m1s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    109s (x3 over 6m8s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-dzdnf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vkvdb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m8s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
	  Warning  Failed     5m38s                kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     45s (x3 over 5m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     45s (x2 over 3m59s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    15s (x4 over 5m37s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     15s (x4 over 5m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x4 over 6m8s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6pnz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s6pnz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-365308
	  Warning  Failed     4m29s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x2 over 4m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     75s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    64s (x2 over 4m29s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     64s (x2 over 4m29s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-98ddz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7qqfn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.11s)

                                                
                                    
TestFunctional/parallel/MySQL (602.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-365308 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-lcmlb" [264f5bca-ab0e-4641-878b-d94257057caa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-02 07:30:36.247335418 +0000 UTC m=+2014.415314215
functional_test.go:1804: (dbg) Run:  kubectl --context functional-365308 describe po mysql-5bb876957f-lcmlb -n default
functional_test.go:1804: (dbg) kubectl --context functional-365308 describe po mysql-5bb876957f-lcmlb -n default:
Name:             mysql-5bb876957f-lcmlb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365308/192.168.39.84
Start Time:       Thu, 02 Oct 2025 07:20:36 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hm6j2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hm6j2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-lcmlb to functional-365308
  Warning  Failed     4m10s                kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     86s (x2 over 7m17s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     86s (x3 over 7m17s)  kubelet            Error: ErrImagePull
  Normal   BackOff    46s (x5 over 7m16s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     46s (x5 over 7m16s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    32s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-365308 logs mysql-5bb876957f-lcmlb -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-365308 logs mysql-5bb876957f-lcmlb -n default: exit status 1 (77.83429ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-lcmlb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-365308 logs mysql-5bb876957f-lcmlb -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-365308 -n functional-365308
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs -n 25: (1.531413401s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-365308 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls                                                                                             │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image save --daemon kicbase/echo-server:functional-365308 --alsologtostderr                          │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/566080.pem                                                               │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /usr/share/ca-certificates/566080.pem                                                   │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/51391683.0                                                               │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/5660802.pem                                                              │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /usr/share/ca-certificates/5660802.pem                                                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                               │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh sudo cat /etc/test/nested/copy/566080/hosts                                                      │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format short --alsologtostderr                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format yaml --alsologtostderr                                                             │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ ssh            │ functional-365308 ssh pgrep buildkitd                                                                                  │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │                     │
	│ image          │ functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr                 │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls                                                                                             │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format json --alsologtostderr                                                             │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ image          │ functional-365308 image ls --format table --alsologtostderr                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ update-context │ functional-365308 update-context --alsologtostderr -v=2                                                                │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:20 UTC │ 02 Oct 25 07:20 UTC │
	│ service        │ functional-365308 service list                                                                                         │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │ 02 Oct 25 07:24 UTC │
	│ service        │ functional-365308 service list -o json                                                                                 │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │ 02 Oct 25 07:24 UTC │
	│ service        │ functional-365308 service --namespace=default --https --url hello-node                                                 │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	│ service        │ functional-365308 service hello-node --url --format={{.IP}}                                                            │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	│ service        │ functional-365308 service hello-node --url                                                                             │ functional-365308 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:15:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:15:35.187582  581092 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:35.187821  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.187830  581092 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:35.187834  581092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.188070  581092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:35.188576  581092 out.go:368] Setting JSON to false
	I1002 07:15:35.189718  581092 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:35.189819  581092 start.go:140] virtualization: kvm guest
	I1002 07:15:35.191796  581092 out.go:179] * [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:35.193503  581092 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:35.193556  581092 notify.go:220] Checking for updates...
	I1002 07:15:35.196373  581092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:35.197849  581092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:35.199369  581092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:35.200924  581092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:35.202196  581092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:35.203955  581092 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:35.204459  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.204534  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.219264  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I1002 07:15:35.219851  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.220549  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.220575  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.220979  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.221194  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.221495  581092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:35.221923  581092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.222012  581092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.236449  581092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I1002 07:15:35.236952  581092 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.237483  581092 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.237509  581092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.237857  581092 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.238107  581092 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.270766  581092 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 07:15:35.272053  581092 start.go:304] selected driver: kvm2
	I1002 07:15:35.272079  581092 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.272230  581092 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:35.273221  581092 cni.go:84] Creating CNI manager for ""
	I1002 07:15:35.273276  581092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 07:15:35.273325  581092 start.go:348] cluster config:
	{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.274855  581092 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.158935538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759390237158909868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=515f279a-5650-4fcb-9312-81987ac00a8a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.159524309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=572cb43d-76be-4d33-ac12-bf59b3cc7f01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.159598467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=572cb43d-76be-4d33-ac12-bf59b3cc7f01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.159902884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=572cb43d-76be-4d33-ac12-bf59b3cc7f01 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.205150875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f597365-61a5-4e4f-8c6c-b9ba2e6dab17 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.205274914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f597365-61a5-4e4f-8c6c-b9ba2e6dab17 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.206602297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83fd8157-5596-451f-a439-1492df6e8f50 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.208389847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759390237208363541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83fd8157-5596-451f-a439-1492df6e8f50 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.209079241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cbf3a46-aece-421d-a873-118bc2b204d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.209137934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cbf3a46-aece-421d-a873-118bc2b204d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.209468434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cbf3a46-aece-421d-a873-118bc2b204d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.248666226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=731906f9-e5f2-4a70-8e32-61b7ea432419 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.248786357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=731906f9-e5f2-4a70-8e32-61b7ea432419 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.250132045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fedd108a-b85e-42bc-8352-f6d589b9ddcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.250841064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759390237250818268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fedd108a-b85e-42bc-8352-f6d589b9ddcb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.251401938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd626ea0-556e-4e9f-a755-1131edc53ddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.251653127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd626ea0-556e-4e9f-a755-1131edc53ddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.252329066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd626ea0-556e-4e9f-a755-1131edc53ddf name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.289948150Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de01d2e9-8b4a-453d-8653-01fded919c96 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.290032267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de01d2e9-8b4a-453d-8653-01fded919c96 name=/runtime.v1.RuntimeService/Version
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.292538069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51e11f24-94a6-45b7-9b3e-4260941e2290 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.293389447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759390237293365303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51e11f24-94a6-45b7-9b3e-4260941e2290 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.293968264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f98031f5-7642-4633-82ec-a02a09e47ed8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.294017660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f98031f5-7642-4633-82ec-a02a09e47ed8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 07:30:37 functional-365308 crio[5347]: time="2025-10-02 07:30:37.294677951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152,PodSandboxId:48cfbcc33d924b0a60bb6b7e459ef53dd6e4e8f8f63d4cc42115f5b7ee59c824,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1759389328053536558,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a35b485-9767-4ffe-854c-046f59e75070,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2,PodSandboxId:ab75d24301c8a68bf3deeb305963b45dd3ea1103ecd7b35ab88ecb551691feab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759389235883291388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76,PodSandboxId:e49891059042a2f9b17540c5a18c743678509eb29be3f6eb2265f0877855579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759389235862771266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"
},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426,PodSandboxId:a6ca9343fea73e18525595a30d4d198fc118c8b2de59d57cfe45ec00230808e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759389231416197711,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ea019518cb7c608509272dfc457404,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b,PodSandboxId:a54c64bfb97e51ec51d8ca456d4c64412111e383103dbc0b22c0709812144a8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965f
cf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759389231317262971,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b45044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a,PodSandboxId:18bbf066598c04fe3174b3d5e0c7e6012ae0341e29da26e40d60e02bf9135d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619
538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759389231237504936,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3,PodSandboxId:0d710f08f4acbb15144e7505bd67dd7790c5f9d71c72b2eed0101b11e43734ca,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759389231223208255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed,PodSandboxId:54
4dcc6a7d08031235aed5855810e9bce169ec6b4cefa9b41515f843fa9f999e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759389228622635180,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2,PodSandboxId:01744b2533cef3a7eb12e7634066cca8d95
a0afbbe3c905c34d181adff498f85,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759389191501686209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dr2ch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e25ac55-0338-49de-8426-92f577e709ff,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}
],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc,PodSandboxId:7ce74d9036d63a01d5fe0a36ec6581cfd9c8a450792f2b214e311a3d74037809,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759389191133467775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxg4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb2aa26-c024-46b3-ba05-c75a03d6e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47,PodSandboxId:b2f9ea0ec2efe453a6aa8c8e6744984dcd51068166c9150b032adfd3d8415fa1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1759389191099631260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24fe67cc-1e8e-4172-8735-0823a4b4e86c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4,PodSandboxId:ed9a30ec8bccad89fec5800f5be1505ecec4dbc225832c50f8d080c5b0dc725d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759389187310800810,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b832b4ca47e9448ee74c5301716261,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52,PodSandboxId:c916958d08d1dc486c55b2dd56bf6bbfbcca70beb9370a16de62e03f2b01c5c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759389187262558732,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-365308,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce43dc216bc0b32b8ca943b7b4
5044c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67,PodSandboxId:6ab97c6c555dabca2dcc28df4ab92e60f63eafe91903cc14959bb5712ee11272,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759389187288329656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-365308,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 643bf1be4096ec113d17583729218a55,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f98031f5-7642-4633-82ec-a02a09e47ed8 name=/runtime.v1.RuntimeService/ListContainers
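The `CreatedAt` values in the ListContainers responses above are unix timestamps in nanoseconds; the relative ages shown in the container status section ("15 minutes ago", "16 minutes ago") are derived from them. A minimal sketch for decoding one (the `mount-munger` timestamp is taken from the log above; the helper name is illustrative, not part of any CRI tooling):

```python
from datetime import datetime, timezone

def created_at(ns: int) -> datetime:
    """Convert a CRI CreatedAt value (unix epoch nanoseconds) to a UTC datetime."""
    return datetime.fromtimestamp(ns / 1e9, tz=timezone.utc)

# mount-munger container from the ListContainersResponse above
print(created_at(1759389328053536558).isoformat())
# → 2025-10-02T07:15:28 UTC, i.e. ~15 minutes before the 07:30:37 log entries
```

Comparing the decoded time against the `ImageFsInfoResponse` timestamp (1759390237… ns, i.e. 07:30:37) gives the ~15-minute age the status table reports.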
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ad6e5cbe559e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 minutes ago      Exited              mount-munger              0                   48cfbcc33d924       busybox-mount
	ed3d142f91fc7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       3                   ab75d24301c8a       storage-provisioner
	f10482c188ce1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Running             coredns                   2                   e49891059042a       coredns-66bc5c9577-dr2ch
	7fb38312e961b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      16 minutes ago      Running             kube-apiserver            0                   a6ca9343fea73       kube-apiserver-functional-365308
	c78daade2b80d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      16 minutes ago      Running             kube-scheduler            2                   a54c64bfb97e5       kube-scheduler-functional-365308
	6357fc139b532       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Running             kube-controller-manager   2                   18bbf066598c0       kube-controller-manager-functional-365308
	fa7569304fff0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Running             etcd                      2                   0d710f08f4acb       etcd-functional-365308
	8c62a867dcb23       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      16 minutes ago      Running             kube-proxy                2                   544dcc6a7d080       kube-proxy-jxg4z
	654514232bd5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Exited              coredns                   1                   01744b2533cef       coredns-66bc5c9577-dr2ch
	b8c09b58f74b3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      17 minutes ago      Exited              kube-proxy                1                   7ce74d9036d63       kube-proxy-jxg4z
	2e56087626b48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       2                   b2f9ea0ec2efe       storage-provisioner
	dd663a3bafc3c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      17 minutes ago      Exited              kube-controller-manager   1                   ed9a30ec8bcca       kube-controller-manager-functional-365308
	a5273276a834c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Exited              etcd                      1                   6ab97c6c555da       etcd-functional-365308
	21b23172820e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      17 minutes ago      Exited              kube-scheduler            1                   c916958d08d1d       kube-scheduler-functional-365308
	
	
	==> coredns [654514232bd5e57d07822ef038d258e73f9f3d6efbc536d16f1ddfeb1e384af2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43655 - 35197 "HINFO IN 6464527999215105262.1447493269828718080. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019873774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10482c188ce1ac0aaefbb211232c989efb5b1417add42ecc898851827843f76] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45177 - 47335 "HINFO IN 2462640784439700111.6184602421018081306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.431628671s
	
	
	==> describe nodes <==
	Name:               functional-365308
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-365308
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-365308
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_12_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:12:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365308
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:30:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:28:51 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:28:51 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:28:51 +0000   Thu, 02 Oct 2025 07:11:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:28:51 +0000   Thu, 02 Oct 2025 07:12:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    functional-365308
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 e775a27534b64fe09ad47371f450f25e
	  System UUID:                e775a275-34b6-4fe0-9ad4-7371f450f25e
	  Boot ID:                    b177903b-98b0-4cba-8131-16d298c21e83
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l8hpp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-dzdnf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     mysql-5bb876957f-lcmlb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-dr2ch                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-functional-365308                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-functional-365308              250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-365308     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jxg4z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-365308              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-98ddz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7qqfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeReady                18m                kubelet          Node functional-365308 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-365308 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-365308 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-365308 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-365308 event: Registered Node functional-365308 in Controller
	
	
	==> dmesg <==
	[  +0.004143] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.199062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.094650] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 2 07:12] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.636584] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.449328] kauditd_printk_skb: 220 callbacks suppressed
	[  +8.887133] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 07:13] kauditd_printk_skb: 78 callbacks suppressed
	[  +4.586313] kauditd_printk_skb: 155 callbacks suppressed
	[  +6.305623] kauditd_printk_skb: 131 callbacks suppressed
	[ +13.021871] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.033866] kauditd_printk_skb: 207 callbacks suppressed
	[  +5.701692] kauditd_printk_skb: 298 callbacks suppressed
	[Oct 2 07:14] kauditd_printk_skb: 36 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000746] kauditd_printk_skb: 104 callbacks suppressed
	[ +25.355545] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 2 07:15] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.120989] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 2 07:17] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 2 07:20] crun[9469]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +5.006843] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [a5273276a834cea37658bac94dd6ddcaa59da6e292a96eb4f1202a0828a5dd67] <==
	{"level":"warn","ts":"2025-10-02T07:13:09.401291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.421763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.455645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.483691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.496396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.509787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:09.562568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60404","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:34.951811Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:34.951953Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:34.952055Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:35.036018Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-02T07:13:35.035951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-02T07:13:35.036229Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9759e6b18ded37f5","current-leader-member-id":"9759e6b18ded37f5"}
	{"level":"info","ts":"2025-10-02T07:13:35.036310Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:35.036335Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036366Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036460Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036471Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036505Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:35.036531Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.84:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:35.036539Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040265Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"error","ts":"2025-10-02T07:13:35.040323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.84:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:35.040345Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.84:2380"}
	{"level":"info","ts":"2025-10-02T07:13:35.040351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-365308","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.84:2380"],"advertise-client-urls":["https://192.168.39.84:2379"]}
	
	
	==> etcd [fa7569304fff0fa45180314d1ebc793eb910db72cb46ea87ef1ed6814538a4a3] <==
	{"level":"warn","ts":"2025-10-02T07:13:53.633439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.661492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.670881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.683973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.700911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.720792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.729169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.741013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.758185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.778650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.797065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.808420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.823816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.849545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.863218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.880463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.887865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.897825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:53.951844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:23:52.928093Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2025-10-02T07:23:52.954819Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1058,"took":"25.843294ms","hash":2224020673,"current-db-size-bytes":3448832,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-02T07:23:52.954861Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2224020673,"revision":1058,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T07:28:52.935514Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1391}
	{"level":"info","ts":"2025-10-02T07:28:52.940010Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1391,"took":"3.868301ms","hash":4113695116,"current-db-size-bytes":3448832,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-02T07:28:52.940062Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4113695116,"revision":1391,"compact-revision":1058}
	
	
	==> kernel <==
	 07:30:37 up 19 min,  0 users,  load average: 0.13, 0.16, 0.17
	Linux functional-365308 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7fb38312e961b37d4e99e163b7038d49b0c48343f3ca453e83e549a99fd83426] <==
	I1002 07:13:54.709589       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:13:54.709594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:13:54.709600       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:13:54.711620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:13:54.712007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:13:54.751895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:13:54.751968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:13:54.759979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:13:55.522212       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:13:55.586546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:13:56.286550       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:13:56.324336       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:13:56.355501       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:13:56.363404       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:13:58.032182       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:13:58.382007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:13:58.430133       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:13.159828       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.249.105"}
	I1002 07:14:18.720798       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.213.147"}
	I1002 07:14:18.786356       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.255.36"}
	I1002 07:15:36.247424       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:15:36.556928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.63.200"}
	I1002 07:15:36.587799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.110.242"}
	I1002 07:20:35.901231       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.166.192"}
	I1002 07:23:54.652347       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6357fc139b532d660683440d54a2ca036f79db938a7eefbbfc908aeb09a0c51a] <==
	I1002 07:13:58.027514       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:13:58.027570       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:58.029299       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 07:13:58.030433       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:13:58.033851       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:13:58.033886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:13:58.033907       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:58.033930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:13:58.035156       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 07:13:58.046575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 07:13:58.061947       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:13:58.071455       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:13:58.072618       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:58.076259       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:58.076592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:58.078406       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:13:58.078444       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1002 07:15:36.382290       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.391320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.396231       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.400237       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.408508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.411325       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.424345       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:15:36.431031       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [dd663a3bafc3c4954156b30cf718430f5f1d77ae62e4befe39e092c84d88c7c4] <==
	I1002 07:13:13.747781       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:13:13.752049       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:13:13.753799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:13:13.753817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:13:13.757257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:13:13.757415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:13:13.759823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:13:13.765229       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:13:13.768404       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:13:13.771788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:13:13.772683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.777870       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:13:13.777883       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:13:13.777908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:13:13.781388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:13:13.781429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:13:13.781437       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:13:13.781500       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:13:13.781652       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:13:13.782005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:13:13.782269       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365308"
	I1002 07:13:13.782440       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:13:13.782628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 07:13:13.796106       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:13:13.800496       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [8c62a867dcb238484f9d95cb149635e527727ad7ce8af2ef2204734d40e50bed] <==
	I1002 07:13:56.196476       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:56.297286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:56.298389       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:56.298939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:56.378665       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:56.378805       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:56.378851       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:56.388885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:56.389271       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:56.389298       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:56.395235       1 config.go:200] "Starting service config controller"
	I1002 07:13:56.395246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:56.395308       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:56.395315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:56.395328       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:56.395332       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:56.395951       1 config.go:309] "Starting node config controller"
	I1002 07:13:56.395995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:56.396012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:56.496073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:56.498921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:56.498946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b8c09b58f74b350920bf39c73e5a06e6e9acd7f408931ba349a6098089abd6bc] <==
	I1002 07:13:11.437618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:11.541886       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:11.542318       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.84"]
	E1002 07:13:11.542473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:11.631110       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 07:13:11.631212       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 07:13:11.631287       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:11.649035       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:11.650112       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:11.650245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:11.656638       1 config.go:200] "Starting service config controller"
	I1002 07:13:11.657046       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:11.657187       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:11.657276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:11.657306       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:11.657457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:11.657578       1 config.go:309] "Starting node config controller"
	I1002 07:13:11.657607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:11.757312       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:11.757366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:11.758052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:11.758405       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [21b23172820e8b121cc7ba898a355718fbd7ab8b6ebd7e5ae81eeca0df52fd52] <==
	I1002 07:13:09.712588       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:13:10.924661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:10.924829       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:10.950803       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:13:10.950890       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:13:10.951166       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951176       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:10.951193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.951482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:10.956572       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:10.956634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:11.053683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:11.055859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:11.055920       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:13:34.946896       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:34.958210       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:34.951293       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:13:34.958307       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1002 07:13:34.958336       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:34.958367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c78daade2b80dfaf79faf19665fde03f7bcc09406042a86cd3dd03d713407d9b] <==
	I1002 07:13:53.387037       1 serving.go:386] Generated self-signed cert in-memory
	W1002 07:13:54.604000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 07:13:54.604051       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:13:54.604062       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 07:13:54.604068       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 07:13:54.662454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:13:54.662496       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:54.669528       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:13:54.672053       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672091       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:54.672109       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:13:54.772597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:29:50 functional-365308 kubelet[6124]: E1002 07:29:50.954050    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759390190953609431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:29:50 functional-365308 kubelet[6124]: E1002 07:29:50.954093    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759390190953609431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:29:55 functional-365308 kubelet[6124]: E1002 07:29:55.113562    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:29:55 functional-365308 kubelet[6124]: E1002 07:29:55.113613    6124 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:29:55 functional-365308 kubelet[6124]: E1002 07:29:55.113833    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:29:55 functional-365308 kubelet[6124]: E1002 07:29:55.113866    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2"
	Oct 02 07:29:57 functional-365308 kubelet[6124]: E1002 07:29:57.564472    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-l8hpp" podUID="735501ef-c068-4379-a210-d9607f551d53"
	Oct 02 07:30:00 functional-365308 kubelet[6124]: E1002 07:30:00.955541    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759390200955162077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:00 functional-365308 kubelet[6124]: E1002 07:30:00.955582    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759390200955162077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:08 functional-365308 kubelet[6124]: E1002 07:30:08.565429    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2"
	Oct 02 07:30:10 functional-365308 kubelet[6124]: E1002 07:30:10.958153    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759390210957872144  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:10 functional-365308 kubelet[6124]: E1002 07:30:10.958174    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759390210957872144  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:11 functional-365308 kubelet[6124]: E1002 07:30:11.565411    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-l8hpp" podUID="735501ef-c068-4379-a210-d9607f551d53"
	Oct 02 07:30:20 functional-365308 kubelet[6124]: E1002 07:30:20.961211    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759390220960632578  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:20 functional-365308 kubelet[6124]: E1002 07:30:20.961234    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759390220960632578  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:22 functional-365308 kubelet[6124]: E1002 07:30:22.565352    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2"
	Oct 02 07:30:23 functional-365308 kubelet[6124]: E1002 07:30:23.565488    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-l8hpp" podUID="735501ef-c068-4379-a210-d9607f551d53"
	Oct 02 07:30:25 functional-365308 kubelet[6124]: E1002 07:30:25.210803    6124 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:30:25 functional-365308 kubelet[6124]: E1002 07:30:25.210883    6124 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 07:30:25 functional-365308 kubelet[6124]: E1002 07:30:25.211117    6124 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-7qqfn_kubernetes-dashboard(d1677996-64fe-4690-93e5-3f89cb8daf89): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:30:25 functional-365308 kubelet[6124]: E1002 07:30:25.211151    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	Oct 02 07:30:30 functional-365308 kubelet[6124]: E1002 07:30:30.964928    6124 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759390230963577382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:30 functional-365308 kubelet[6124]: E1002 07:30:30.964952    6124 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759390230963577382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 02 07:30:33 functional-365308 kubelet[6124]: E1002 07:30:33.564776    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bc3c0a6e-e214-4c78-aab8-7f5b6ad9c0a2"
	Oct 02 07:30:36 functional-365308 kubelet[6124]: E1002 07:30:36.572432    6124 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7qqfn" podUID="d1677996-64fe-4690-93e5-3f89cb8daf89"
	
	
	==> storage-provisioner [2e56087626b4801aae86def489f0b93d35bdad4002b55912c4385e6ea98d9c47] <==
	I1002 07:13:11.241110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 07:13:11.259358       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 07:13:11.259418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 07:13:11.268920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:14.723250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:18.984104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:22.583256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:25.638850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.661791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.669613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.669764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 07:13:28.669910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	I1002 07:13:28.670856       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a387236-ee84-46b3-84ea-284c6b247438", APIVersion:"v1", ResourceVersion:"520", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f became leader
	W1002 07:13:28.675446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:28.685078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 07:13:28.769996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365308_53ef7ce2-bf61-4220-8e13-5149ec1ac85f!
	W1002 07:13:30.688158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:30.693655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.698438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:32.708988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.712353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:34.722195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ed3d142f91fc7f63475b9f4365b415dc4d43a678efa093da09edcbf5970a0af2] <==
	W1002 07:30:12.857558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:14.860599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:14.866478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:16.869428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:16.875406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:18.879872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:18.889388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:20.893265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:20.898384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:22.901762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:22.911225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:24.915130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:24.921084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:26.924536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:26.928994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:28.932255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:28.937625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:30.943008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:30.953646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:32.957599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:32.962976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:34.967115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:34.975141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:36.980019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:36.987649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
helpers_test.go:269: (dbg) Run:  kubectl --context functional-365308 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1 (111.683702ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5ad6e5cbe559e0e0e63d660eef2de7db54e55a43e7df3fb11b2e720c19231152
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:15:28 +0000
	      Finished:     Thu, 02 Oct 2025 07:15:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v5w4j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v5w4j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-365308
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.334s (1m5.616s including waiting). Image size: 4631262 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l8hpp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw92f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mw92f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l8hpp to functional-365308
	  Warning  Failed     15m                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m11s (x5 over 16m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     118s (x5 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed     118s (x4 over 12m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    0s (x20 over 15m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x20 over 15m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-dzdnf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkvdb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vkvdb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-dzdnf to functional-365308
	  Warning  Failed     15m                  kubelet            Failed to pull image "kicbase/echo-server": initializing source docker://kicbase/echo-server:latest: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m49s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m42s (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m42s (x3 over 14m)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     118s (x17 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    73s (x20 over 15m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    62s (x6 over 16m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-lcmlb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:20:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hm6j2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hm6j2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-lcmlb to functional-365308
	  Warning  Failed     4m12s                kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     88s (x2 over 7m19s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     88s (x3 over 7m19s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x5 over 7m18s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     48s (x5 over 7m18s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365308/192.168.39.84
	Start Time:       Thu, 02 Oct 2025 07:14:24 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6pnz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-s6pnz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/sp-pod to functional-365308
	  Warning  Failed     5m12s (x3 over 14m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m40s (x5 over 16m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     43s (x5 over 14m)    kubelet            Error: ErrImagePull
	  Warning  Failed     43s (x2 over 11m)    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x13 over 14m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5s (x13 over 14m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-98ddz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7qqfn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-365308 describe pod busybox-mount hello-node-75c85bcc94-l8hpp hello-node-connect-7d85dfc575-dzdnf mysql-5bb876957f-lcmlb sp-pod dashboard-metrics-scraper-77bf4d6c4c-98ddz kubernetes-dashboard-855c9754f9-7qqfn: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-365308 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-365308 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l8hpp" [735501ef-c068-4379-a210-d9607f551d53] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-365308 -n functional-365308
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 07:24:19.110880161 +0000 UTC m=+1637.278858959
functional_test.go:1460: (dbg) Run:  kubectl --context functional-365308 describe po hello-node-75c85bcc94-l8hpp -n default
functional_test.go:1460: (dbg) kubectl --context functional-365308 describe po hello-node-75c85bcc94-l8hpp -n default:
Name:             hello-node-75c85bcc94-l8hpp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365308/192.168.39.84
Start Time:       Thu, 02 Oct 2025 07:14:18 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mw92f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-mw92f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l8hpp to functional-365308
Warning  Failed     8m54s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m (x3 over 8m54s)     kubelet            Error: ErrImagePull
Warning  Failed     3m (x2 over 6m7s)      kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m23s (x5 over 8m53s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     2m23s (x5 over 8m53s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    2m11s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-365308 logs hello-node-75c85bcc94-l8hpp -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-365308 logs hello-node-75c85bcc94-l8hpp -n default: exit status 1 (68.989052ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-l8hpp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-365308 logs hello-node-75c85bcc94-l8hpp -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 service --namespace=default --https --url hello-node: exit status 115 (316.782314ms)

                                                
                                                
-- stdout --
	https://192.168.39.84:30566
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-365308 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 service hello-node --url --format={{.IP}}: exit status 115 (308.476575ms)

                                                
                                                
-- stdout --
	192.168.39.84
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-365308 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 service hello-node --url: exit status 115 (307.957982ms)

                                                
                                                
-- stdout --
	http://192.168.39.84:30566
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-365308 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.84:30566
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.31s)
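The failure above is minikube's service-reachability guard: the NodePort URL resolved, but no pod backing `hello-node` was in the Running phase, so the command exited with SVC_UNREACHABLE (exit status 115). A minimal sketch of that kind of guard follows; the pod structures, selector, and helper name are hypothetical illustrations, not minikube's actual implementation:

```python
def service_url_or_error(url, pods, selector):
    """Return the NodePort URL only if at least one selected pod is Running."""
    backing = [p for p in pods
               if selector.items() <= p["labels"].items()
               and p["phase"] == "Running"]
    if not backing:
        # mirrors: "service not available: no running pod for service ... found"
        raise RuntimeError("SVC_UNREACHABLE: no running pod for service found")
    return url

# A pod that exists but is not yet Running still counts as unreachable.
pods = [{"labels": {"app": "hello-node"}, "phase": "Pending"}]
try:
    service_url_or_error("http://192.168.39.84:30566", pods, {"app": "hello-node"})
except RuntimeError as e:
    print(e)  # no Running pod behind the service -> unreachable
```

This matches the log's apparent ordering: the URL itself is printed to stdout, while the reachability error goes to stderr with a non-zero exit code.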

TestPreload (163.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977743 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1002 08:10:52.832441  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977743 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m41.444999712s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977743 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-977743 image pull gcr.io/k8s-minikube/busybox: (2.425478725s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-977743
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-977743: (8.643124606s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977743 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977743 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.875433526s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977743 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
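The assertion at preload_test.go:75 reduces to a substring check over `image list` output: after stop/restart, the previously pulled `gcr.io/k8s-minikube/busybox` should still be listed, but the restart appears to have repopulated images from the v1.32.0 preload tarball without it. A sketch of that check, using the image names from the output above (the helper name is illustrative, not the test's actual code):

```python
def image_in_list(expected, listed):
    """True if any listed image reference contains the expected repository path."""
    return any(expected in ref for ref in listed)

# Image list actually reported after the restart (abridged from the log).
listed = [
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/kube-scheduler:v1.32.0",
    "registry.k8s.io/etcd:3.5.16-0",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
    "docker.io/kindest/kindnetd:v20241108-5c6d2daf",
]
print(image_in_list("gcr.io/k8s-minikube/busybox", listed))  # False -> test fails
```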
panic.go:636: *** TestPreload FAILED at 2025-10-02 08:12:28.18783989 +0000 UTC m=+4526.355818687
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-977743 -n test-preload-977743
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977743 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-977743 logs -n 25: (1.15332601s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-465973 ssh -n multinode-465973-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ ssh     │ multinode-465973 ssh -n multinode-465973 sudo cat /home/docker/cp-test_multinode-465973-m03_multinode-465973.txt                                                                    │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ cp      │ multinode-465973 cp multinode-465973-m03:/home/docker/cp-test.txt multinode-465973-m02:/home/docker/cp-test_multinode-465973-m03_multinode-465973-m02.txt                           │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ ssh     │ multinode-465973 ssh -n multinode-465973-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ ssh     │ multinode-465973 ssh -n multinode-465973-m02 sudo cat /home/docker/cp-test_multinode-465973-m03_multinode-465973-m02.txt                                                            │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ node    │ multinode-465973 node stop m03                                                                                                                                                      │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ node    │ multinode-465973 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 07:59 UTC │
	│ node    │ list -p multinode-465973                                                                                                                                                            │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │                     │
	│ stop    │ -p multinode-465973                                                                                                                                                                 │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 08:02 UTC │
	│ start   │ -p multinode-465973 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:04 UTC │
	│ node    │ list -p multinode-465973                                                                                                                                                            │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ node    │ multinode-465973 node delete m03                                                                                                                                                    │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ stop    │ multinode-465973 stop                                                                                                                                                               │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:07 UTC │
	│ start   │ -p multinode-465973 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:07 UTC │ 02 Oct 25 08:08 UTC │
	│ node    │ list -p multinode-465973                                                                                                                                                            │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:08 UTC │                     │
	│ start   │ -p multinode-465973-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-465973-m02 │ jenkins │ v1.37.0 │ 02 Oct 25 08:08 UTC │                     │
	│ start   │ -p multinode-465973-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-465973-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 08:08 UTC │ 02 Oct 25 08:09 UTC │
	│ node    │ add -p multinode-465973                                                                                                                                                             │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:09 UTC │                     │
	│ delete  │ -p multinode-465973-m03                                                                                                                                                             │ multinode-465973-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 08:09 UTC │ 02 Oct 25 08:09 UTC │
	│ delete  │ -p multinode-465973                                                                                                                                                                 │ multinode-465973     │ jenkins │ v1.37.0 │ 02 Oct 25 08:09 UTC │ 02 Oct 25 08:09 UTC │
	│ start   │ -p test-preload-977743 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-977743  │ jenkins │ v1.37.0 │ 02 Oct 25 08:09 UTC │ 02 Oct 25 08:11 UTC │
	│ image   │ test-preload-977743 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-977743  │ jenkins │ v1.37.0 │ 02 Oct 25 08:11 UTC │ 02 Oct 25 08:11 UTC │
	│ stop    │ -p test-preload-977743                                                                                                                                                              │ test-preload-977743  │ jenkins │ v1.37.0 │ 02 Oct 25 08:11 UTC │ 02 Oct 25 08:11 UTC │
	│ start   │ -p test-preload-977743 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-977743  │ jenkins │ v1.37.0 │ 02 Oct 25 08:11 UTC │ 02 Oct 25 08:12 UTC │
	│ image   │ test-preload-977743 image list                                                                                                                                                      │ test-preload-977743  │ jenkins │ v1.37.0 │ 02 Oct 25 08:12 UTC │ 02 Oct 25 08:12 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:11:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:11:40.124511  607622 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:11:40.124817  607622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:11:40.124828  607622 out.go:374] Setting ErrFile to fd 2...
	I1002 08:11:40.124832  607622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:11:40.125074  607622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 08:11:40.125574  607622 out.go:368] Setting JSON to false
	I1002 08:11:40.126512  607622 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":53650,"bootTime":1759339050,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 08:11:40.126611  607622 start.go:140] virtualization: kvm guest
	I1002 08:11:40.128920  607622 out.go:179] * [test-preload-977743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 08:11:40.130446  607622 notify.go:220] Checking for updates...
	I1002 08:11:40.130504  607622 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:11:40.131984  607622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:11:40.133491  607622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 08:11:40.135088  607622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 08:11:40.136749  607622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 08:11:40.138174  607622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:11:40.140248  607622 config.go:182] Loaded profile config "test-preload-977743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 08:11:40.140859  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:11:40.140960  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:11:40.155088  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I1002 08:11:40.155752  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:11:40.156360  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:11:40.156412  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:11:40.157012  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:11:40.157256  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:11:40.159207  607622 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 08:11:40.161240  607622 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:11:40.161545  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:11:40.161584  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:11:40.175842  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I1002 08:11:40.176339  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:11:40.176865  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:11:40.176897  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:11:40.177297  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:11:40.177551  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:11:40.213610  607622 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 08:11:40.215107  607622 start.go:304] selected driver: kvm2
	I1002 08:11:40.215131  607622 start.go:924] validating driver "kvm2" against &{Name:test-preload-977743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-977743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:11:40.215287  607622 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:11:40.216195  607622 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:11:40.216291  607622 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 08:11:40.231563  607622 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 08:11:40.231613  607622 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 08:11:40.246589  607622 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 08:11:40.246968  607622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:11:40.247018  607622 cni.go:84] Creating CNI manager for ""
	I1002 08:11:40.247063  607622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 08:11:40.247122  607622 start.go:348] cluster config:
	{Name:test-preload-977743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-977743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:11:40.247265  607622 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:11:40.249216  607622 out.go:179] * Starting "test-preload-977743" primary control-plane node in "test-preload-977743" cluster
	I1002 08:11:40.250466  607622 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 08:11:40.268360  607622 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 08:11:40.268397  607622 cache.go:58] Caching tarball of preloaded images
	I1002 08:11:40.268577  607622 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 08:11:40.270465  607622 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1002 08:11:40.271812  607622 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 08:11:40.302705  607622 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1002 08:11:40.302758  607622 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 08:11:42.551392  607622 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
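	(The download URL above carries a `?checksum=md5:...` query parameter, and the preload tarball is verified against that digest before use. A minimal sketch of such a verification step, using only `hashlib`; the payload bytes here are a stand-in for the tarball, not the real artifact:)

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Compare the MD5 digest of downloaded bytes to the checksum from the URL."""
    return hashlib.md5(data).hexdigest() == expected_hex

payload = b"preloaded-images-k8s-v18-v1.32.0"  # stand-in for the tarball bytes
digest = hashlib.md5(payload).hexdigest()
print(verify_md5(payload, digest))   # matching digest -> accepted
print(verify_md5(payload, "0" * 32)) # mismatched digest -> rejected
```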
	I1002 08:11:42.551589  607622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/config.json ...
	I1002 08:11:42.551875  607622 start.go:360] acquireMachinesLock for test-preload-977743: {Name:mk200887a2360c0adfa27edc65d8cb08bb2838a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 08:11:42.551977  607622 start.go:364] duration metric: took 73.394µs to acquireMachinesLock for "test-preload-977743"
	I1002 08:11:42.552003  607622 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:11:42.552010  607622 fix.go:54] fixHost starting: 
	I1002 08:11:42.552382  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:11:42.552435  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:11:42.566485  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I1002 08:11:42.566990  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:11:42.567465  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:11:42.567492  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:11:42.567912  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:11:42.568175  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:11:42.568349  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetState
	I1002 08:11:42.570769  607622 fix.go:112] recreateIfNeeded on test-preload-977743: state=Stopped err=<nil>
	I1002 08:11:42.570807  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	W1002 08:11:42.571006  607622 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 08:11:42.573168  607622 out.go:252] * Restarting existing kvm2 VM for "test-preload-977743" ...
	I1002 08:11:42.573202  607622 main.go:141] libmachine: (test-preload-977743) Calling .Start
	I1002 08:11:42.573400  607622 main.go:141] libmachine: (test-preload-977743) starting domain...
	I1002 08:11:42.573413  607622 main.go:141] libmachine: (test-preload-977743) ensuring networks are active...
	I1002 08:11:42.574301  607622 main.go:141] libmachine: (test-preload-977743) Ensuring network default is active
	I1002 08:11:42.574771  607622 main.go:141] libmachine: (test-preload-977743) Ensuring network mk-test-preload-977743 is active
	I1002 08:11:42.575281  607622 main.go:141] libmachine: (test-preload-977743) getting domain XML...
	I1002 08:11:42.576560  607622 main.go:141] libmachine: (test-preload-977743) DBG | starting domain XML:
	I1002 08:11:42.576586  607622 main.go:141] libmachine: (test-preload-977743) DBG | <domain type='kvm'>
	I1002 08:11:42.576597  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <name>test-preload-977743</name>
	I1002 08:11:42.576607  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <uuid>4d2e4402-d801-4631-b0af-c3c2bb43586b</uuid>
	I1002 08:11:42.576621  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 08:11:42.576630  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 08:11:42.576643  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 08:11:42.576653  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <os>
	I1002 08:11:42.576675  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 08:11:42.576701  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <boot dev='cdrom'/>
	I1002 08:11:42.576711  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <boot dev='hd'/>
	I1002 08:11:42.576726  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <bootmenu enable='no'/>
	I1002 08:11:42.576737  607622 main.go:141] libmachine: (test-preload-977743) DBG |   </os>
	I1002 08:11:42.576745  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <features>
	I1002 08:11:42.576750  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <acpi/>
	I1002 08:11:42.576756  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <apic/>
	I1002 08:11:42.576762  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <pae/>
	I1002 08:11:42.576766  607622 main.go:141] libmachine: (test-preload-977743) DBG |   </features>
	I1002 08:11:42.576772  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 08:11:42.576778  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <clock offset='utc'/>
	I1002 08:11:42.576787  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 08:11:42.576795  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <on_reboot>restart</on_reboot>
	I1002 08:11:42.576839  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <on_crash>destroy</on_crash>
	I1002 08:11:42.576868  607622 main.go:141] libmachine: (test-preload-977743) DBG |   <devices>
	I1002 08:11:42.576883  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 08:11:42.576896  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <disk type='file' device='cdrom'>
	I1002 08:11:42.576907  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <driver name='qemu' type='raw'/>
	I1002 08:11:42.576922  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/boot2docker.iso'/>
	I1002 08:11:42.576939  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 08:11:42.576955  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <readonly/>
	I1002 08:11:42.576984  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 08:11:42.577003  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </disk>
	I1002 08:11:42.577013  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <disk type='file' device='disk'>
	I1002 08:11:42.577027  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 08:11:42.577042  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <source file='/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/test-preload-977743.rawdisk'/>
	I1002 08:11:42.577053  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <target dev='hda' bus='virtio'/>
	I1002 08:11:42.577066  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 08:11:42.577077  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </disk>
	I1002 08:11:42.577087  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 08:11:42.577103  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 08:11:42.577115  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </controller>
	I1002 08:11:42.577124  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 08:11:42.577132  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 08:11:42.577165  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 08:11:42.577179  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </controller>
	I1002 08:11:42.577190  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <interface type='network'>
	I1002 08:11:42.577202  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <mac address='52:54:00:ba:d2:4a'/>
	I1002 08:11:42.577211  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <source network='mk-test-preload-977743'/>
	I1002 08:11:42.577222  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <model type='virtio'/>
	I1002 08:11:42.577234  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 08:11:42.577247  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </interface>
	I1002 08:11:42.577258  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <interface type='network'>
	I1002 08:11:42.577268  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <mac address='52:54:00:16:d9:a1'/>
	I1002 08:11:42.577279  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <source network='default'/>
	I1002 08:11:42.577291  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <model type='virtio'/>
	I1002 08:11:42.577307  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 08:11:42.577316  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </interface>
	I1002 08:11:42.577323  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <serial type='pty'>
	I1002 08:11:42.577336  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <target type='isa-serial' port='0'>
	I1002 08:11:42.577344  607622 main.go:141] libmachine: (test-preload-977743) DBG |         <model name='isa-serial'/>
	I1002 08:11:42.577356  607622 main.go:141] libmachine: (test-preload-977743) DBG |       </target>
	I1002 08:11:42.577364  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </serial>
	I1002 08:11:42.577373  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <console type='pty'>
	I1002 08:11:42.577388  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <target type='serial' port='0'/>
	I1002 08:11:42.577400  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </console>
	I1002 08:11:42.577407  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <input type='mouse' bus='ps2'/>
	I1002 08:11:42.577412  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 08:11:42.577423  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <audio id='1' type='none'/>
	I1002 08:11:42.577436  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <memballoon model='virtio'>
	I1002 08:11:42.577446  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 08:11:42.577466  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </memballoon>
	I1002 08:11:42.577482  607622 main.go:141] libmachine: (test-preload-977743) DBG |     <rng model='virtio'>
	I1002 08:11:42.577502  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <backend model='random'>/dev/random</backend>
	I1002 08:11:42.577532  607622 main.go:141] libmachine: (test-preload-977743) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 08:11:42.577541  607622 main.go:141] libmachine: (test-preload-977743) DBG |     </rng>
	I1002 08:11:42.577551  607622 main.go:141] libmachine: (test-preload-977743) DBG |   </devices>
	I1002 08:11:42.577560  607622 main.go:141] libmachine: (test-preload-977743) DBG | </domain>
	I1002 08:11:42.577569  607622 main.go:141] libmachine: (test-preload-977743) DBG | 
	I1002 08:11:43.857442  607622 main.go:141] libmachine: (test-preload-977743) waiting for domain to start...
	I1002 08:11:43.858980  607622 main.go:141] libmachine: (test-preload-977743) domain is now running
	I1002 08:11:43.859007  607622 main.go:141] libmachine: (test-preload-977743) waiting for IP...
	I1002 08:11:43.859897  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:43.860556  607622 main.go:141] libmachine: (test-preload-977743) found domain IP: 192.168.39.42
	I1002 08:11:43.860592  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has current primary IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:43.860603  607622 main.go:141] libmachine: (test-preload-977743) reserving static IP address...
	I1002 08:11:43.861094  607622 main.go:141] libmachine: (test-preload-977743) reserved static IP address 192.168.39.42 for domain test-preload-977743
	I1002 08:11:43.861155  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "test-preload-977743", mac: "52:54:00:ba:d2:4a", ip: "192.168.39.42"} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:10:03 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:43.861209  607622 main.go:141] libmachine: (test-preload-977743) DBG | skip adding static IP to network mk-test-preload-977743 - found existing host DHCP lease matching {name: "test-preload-977743", mac: "52:54:00:ba:d2:4a", ip: "192.168.39.42"}
	I1002 08:11:43.861230  607622 main.go:141] libmachine: (test-preload-977743) waiting for SSH...
	I1002 08:11:43.861243  607622 main.go:141] libmachine: (test-preload-977743) DBG | Getting to WaitForSSH function...
	I1002 08:11:43.863736  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:43.864176  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:10:03 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:43.864208  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:43.864405  607622 main.go:141] libmachine: (test-preload-977743) DBG | Using SSH client type: external
	I1002 08:11:43.864443  607622 main.go:141] libmachine: (test-preload-977743) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa (-rw-------)
	I1002 08:11:43.864478  607622 main.go:141] libmachine: (test-preload-977743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 08:11:43.864493  607622 main.go:141] libmachine: (test-preload-977743) DBG | About to run SSH command:
	I1002 08:11:43.864505  607622 main.go:141] libmachine: (test-preload-977743) DBG | exit 0
	I1002 08:11:55.157848  607622 main.go:141] libmachine: (test-preload-977743) DBG | SSH cmd err, output: exit status 255: 
	I1002 08:11:55.157882  607622 main.go:141] libmachine: (test-preload-977743) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1002 08:11:55.157893  607622 main.go:141] libmachine: (test-preload-977743) DBG | command : exit 0
	I1002 08:11:55.157898  607622 main.go:141] libmachine: (test-preload-977743) DBG | err     : exit status 255
	I1002 08:11:55.157908  607622 main.go:141] libmachine: (test-preload-977743) DBG | output  : 
	I1002 08:11:58.158663  607622 main.go:141] libmachine: (test-preload-977743) DBG | Getting to WaitForSSH function...
	I1002 08:11:58.162581  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.163119  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.163168  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.163368  607622 main.go:141] libmachine: (test-preload-977743) DBG | Using SSH client type: external
	I1002 08:11:58.163395  607622 main.go:141] libmachine: (test-preload-977743) DBG | Using SSH private key: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa (-rw-------)
	I1002 08:11:58.163423  607622 main.go:141] libmachine: (test-preload-977743) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 08:11:58.163436  607622 main.go:141] libmachine: (test-preload-977743) DBG | About to run SSH command:
	I1002 08:11:58.163464  607622 main.go:141] libmachine: (test-preload-977743) DBG | exit 0
	I1002 08:11:58.296386  607622 main.go:141] libmachine: (test-preload-977743) DBG | SSH cmd err, output: <nil>: 
	I1002 08:11:58.296802  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetConfigRaw
	I1002 08:11:58.297517  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetIP
	I1002 08:11:58.300635  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.301054  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.301082  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.301390  607622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/config.json ...
	I1002 08:11:58.301599  607622 machine.go:93] provisionDockerMachine start ...
	I1002 08:11:58.301620  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:11:58.301848  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:58.304382  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.304790  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.304816  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.304968  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:58.305158  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.305315  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.305432  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:58.305558  607622 main.go:141] libmachine: Using SSH client type: native
	I1002 08:11:58.305889  607622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1002 08:11:58.305905  607622 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:11:58.405187  607622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 08:11:58.405216  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetMachineName
	I1002 08:11:58.405508  607622 buildroot.go:166] provisioning hostname "test-preload-977743"
	I1002 08:11:58.405536  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetMachineName
	I1002 08:11:58.405746  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:58.409067  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.409531  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.409561  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.409736  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:58.409905  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.410025  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.410131  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:58.410290  607622 main.go:141] libmachine: Using SSH client type: native
	I1002 08:11:58.410578  607622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1002 08:11:58.410595  607622 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-977743 && echo "test-preload-977743" | sudo tee /etc/hostname
	I1002 08:11:58.531082  607622 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-977743
	
	I1002 08:11:58.531127  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:58.534411  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.534759  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.534796  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.534985  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:58.535240  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.535415  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:58.535570  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:58.535741  607622 main.go:141] libmachine: Using SSH client type: native
	I1002 08:11:58.535943  607622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1002 08:11:58.535959  607622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-977743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-977743/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-977743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:11:58.648581  607622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:11:58.648621  607622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21643-562157/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-562157/.minikube}
	I1002 08:11:58.648682  607622 buildroot.go:174] setting up certificates
	I1002 08:11:58.648697  607622 provision.go:84] configureAuth start
	I1002 08:11:58.648719  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetMachineName
	I1002 08:11:58.649044  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetIP
	I1002 08:11:58.652318  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.652776  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.652809  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.653054  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:58.655453  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.655845  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:58.655882  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:58.656077  607622 provision.go:143] copyHostCerts
	I1002 08:11:58.656166  607622 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem, removing ...
	I1002 08:11:58.656194  607622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem
	I1002 08:11:58.656269  607622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/ca.pem (1078 bytes)
	I1002 08:11:58.656390  607622 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem, removing ...
	I1002 08:11:58.656401  607622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem
	I1002 08:11:58.656433  607622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/cert.pem (1123 bytes)
	I1002 08:11:58.656491  607622 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem, removing ...
	I1002 08:11:58.656498  607622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem
	I1002 08:11:58.656522  607622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-562157/.minikube/key.pem (1675 bytes)
	I1002 08:11:58.656572  607622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem org=jenkins.test-preload-977743 san=[127.0.0.1 192.168.39.42 localhost minikube test-preload-977743]
	I1002 08:11:59.377345  607622 provision.go:177] copyRemoteCerts
	I1002 08:11:59.377438  607622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:11:59.377472  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:59.380624  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.380975  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:59.381010  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.381177  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:59.381382  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.381589  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:59.381730  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:11:59.464881  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 08:11:59.498379  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 08:11:59.529281  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:11:59.559688  607622 provision.go:87] duration metric: took 910.966733ms to configureAuth
	I1002 08:11:59.559725  607622 buildroot.go:189] setting minikube options for container-runtime
	I1002 08:11:59.559935  607622 config.go:182] Loaded profile config "test-preload-977743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 08:11:59.560050  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:59.563386  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.563779  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:59.563814  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.564062  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:59.564300  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.564459  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.564634  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:59.564816  607622 main.go:141] libmachine: Using SSH client type: native
	I1002 08:11:59.565045  607622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1002 08:11:59.565063  607622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:11:59.807258  607622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:11:59.807296  607622 machine.go:96] duration metric: took 1.505681466s to provisionDockerMachine
	I1002 08:11:59.807310  607622 start.go:293] postStartSetup for "test-preload-977743" (driver="kvm2")
	I1002 08:11:59.807321  607622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:11:59.807340  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:11:59.807692  607622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:11:59.807741  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:59.810936  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.811405  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:59.811435  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.811603  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:59.811799  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.811978  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:59.812130  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:11:59.894994  607622 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:11:59.900270  607622 info.go:137] Remote host: Buildroot 2025.02
	I1002 08:11:59.900304  607622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/addons for local assets ...
	I1002 08:11:59.900405  607622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-562157/.minikube/files for local assets ...
	I1002 08:11:59.900515  607622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-562157/.minikube/files/etc/ssl/certs/5660802.pem -> 5660802.pem in /etc/ssl/certs
	I1002 08:11:59.900641  607622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:11:59.912544  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/files/etc/ssl/certs/5660802.pem --> /etc/ssl/certs/5660802.pem (1708 bytes)
	I1002 08:11:59.943017  607622 start.go:296] duration metric: took 135.686794ms for postStartSetup
	I1002 08:11:59.943075  607622 fix.go:56] duration metric: took 17.391065139s for fixHost
	I1002 08:11:59.943105  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:11:59.945931  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.946371  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:11:59.946398  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:11:59.946561  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:11:59.946770  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.946940  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:11:59.947119  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:11:59.947331  607622 main.go:141] libmachine: Using SSH client type: native
	I1002 08:11:59.947588  607622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1002 08:11:59.947601  607622 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 08:12:00.048198  607622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759392720.011193962
	
	I1002 08:12:00.048233  607622 fix.go:216] guest clock: 1759392720.011193962
	I1002 08:12:00.048244  607622 fix.go:229] Guest: 2025-10-02 08:12:00.011193962 +0000 UTC Remote: 2025-10-02 08:11:59.94308166 +0000 UTC m=+19.858145518 (delta=68.112302ms)
	I1002 08:12:00.048324  607622 fix.go:200] guest clock delta is within tolerance: 68.112302ms
	I1002 08:12:00.048334  607622 start.go:83] releasing machines lock for "test-preload-977743", held for 17.496342285s
	I1002 08:12:00.048373  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:00.048706  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetIP
	I1002 08:12:00.051871  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.052339  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:00.052370  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.052586  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:00.053160  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:00.053334  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:00.053428  607622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:12:00.053475  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:12:00.053608  607622 ssh_runner.go:195] Run: cat /version.json
	I1002 08:12:00.053639  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:12:00.056796  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.056829  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.057324  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:00.057356  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.057384  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:00.057400  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:00.057550  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:12:00.057687  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:12:00.057766  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:12:00.057855  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:12:00.057902  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:12:00.057974  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:12:00.058087  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:12:00.058098  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:12:00.134804  607622 ssh_runner.go:195] Run: systemctl --version
	I1002 08:12:00.160371  607622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:12:00.315683  607622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:12:00.324274  607622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:12:00.324349  607622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:12:00.345754  607622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 08:12:00.345778  607622 start.go:495] detecting cgroup driver to use...
	I1002 08:12:00.345845  607622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:12:00.364980  607622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:12:00.383739  607622 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:12:00.383807  607622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:12:00.402395  607622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:12:00.420417  607622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:12:00.571760  607622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:12:00.790188  607622 docker.go:234] disabling docker service ...
	I1002 08:12:00.790280  607622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:12:00.810023  607622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:12:00.825697  607622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:12:00.993564  607622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:12:01.138015  607622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:12:01.154003  607622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:12:01.178456  607622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 08:12:01.178533  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.201011  607622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:12:01.201100  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.214853  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.228127  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.241607  607622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:12:01.255856  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.270152  607622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.291992  607622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:12:01.305749  607622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:12:01.316884  607622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 08:12:01.316952  607622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 08:12:01.338856  607622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:12:01.351480  607622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:12:01.502399  607622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 08:12:01.626033  607622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:12:01.626130  607622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:12:01.632129  607622 start.go:563] Will wait 60s for crictl version
	I1002 08:12:01.632206  607622 ssh_runner.go:195] Run: which crictl
	I1002 08:12:01.636771  607622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 08:12:01.686035  607622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 08:12:01.686179  607622 ssh_runner.go:195] Run: crio --version
	I1002 08:12:01.716541  607622 ssh_runner.go:195] Run: crio --version
	I1002 08:12:01.749712  607622 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1002 08:12:01.751197  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetIP
	I1002 08:12:01.754718  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:01.755170  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:01.755204  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:01.755432  607622 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 08:12:01.760462  607622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:12:01.777435  607622 kubeadm.go:883] updating cluster {Name:test-preload-977743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-977743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:12:01.777550  607622 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 08:12:01.777600  607622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:12:01.817126  607622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1002 08:12:01.817230  607622 ssh_runner.go:195] Run: which lz4
	I1002 08:12:01.821862  607622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 08:12:01.826983  607622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 08:12:01.827023  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1002 08:12:03.424614  607622 crio.go:462] duration metric: took 1.602795728s to copy over tarball
	I1002 08:12:03.424708  607622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 08:12:05.121745  607622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.697005709s)
	I1002 08:12:05.121788  607622 crio.go:469] duration metric: took 1.697139587s to extract the tarball
	I1002 08:12:05.121797  607622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 08:12:05.164197  607622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:12:05.209358  607622 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:12:05.209388  607622 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:12:05.209395  607622 kubeadm.go:934] updating node { 192.168.39.42 8443 v1.32.0 crio true true} ...
	I1002 08:12:05.209502  607622 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-977743 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-977743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:12:05.209569  607622 ssh_runner.go:195] Run: crio config
	I1002 08:12:05.255870  607622 cni.go:84] Creating CNI manager for ""
	I1002 08:12:05.255895  607622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 08:12:05.255942  607622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:12:05.255964  607622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.42 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-977743 NodeName:test-preload-977743 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:12:05.256098  607622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-977743"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.42"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.42"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:12:05.256186  607622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1002 08:12:05.269009  607622 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:12:05.269101  607622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:12:05.281416  607622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1002 08:12:05.303333  607622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:12:05.324522  607622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1002 08:12:05.345631  607622 ssh_runner.go:195] Run: grep 192.168.39.42	control-plane.minikube.internal$ /etc/hosts
	I1002 08:12:05.350284  607622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:12:05.365660  607622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:12:05.516735  607622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:12:05.537073  607622 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743 for IP: 192.168.39.42
	I1002 08:12:05.537107  607622 certs.go:195] generating shared ca certs ...
	I1002 08:12:05.537133  607622 certs.go:227] acquiring lock for ca certs: {Name:mk8e87648e070d331709ecc08a93a441c20cc0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:12:05.537394  607622 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key
	I1002 08:12:05.537472  607622 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key
	I1002 08:12:05.537490  607622 certs.go:257] generating profile certs ...
	I1002 08:12:05.537617  607622 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.key
	I1002 08:12:05.537710  607622 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/apiserver.key.de465569
	I1002 08:12:05.537778  607622 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/proxy-client.key
	I1002 08:12:05.537948  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/566080.pem (1338 bytes)
	W1002 08:12:05.538005  607622 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-562157/.minikube/certs/566080_empty.pem, impossibly tiny 0 bytes
	I1002 08:12:05.538020  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:12:05.538049  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/ca.pem (1078 bytes)
	I1002 08:12:05.538085  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:12:05.538116  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/certs/key.pem (1675 bytes)
	I1002 08:12:05.538191  607622 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-562157/.minikube/files/etc/ssl/certs/5660802.pem (1708 bytes)
	I1002 08:12:05.539011  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:12:05.586089  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:12:05.625534  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:12:05.658326  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 08:12:05.688042  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 08:12:05.717743  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 08:12:05.747321  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:12:05.777234  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 08:12:05.806402  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/certs/566080.pem --> /usr/share/ca-certificates/566080.pem (1338 bytes)
	I1002 08:12:05.834692  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/files/etc/ssl/certs/5660802.pem --> /usr/share/ca-certificates/5660802.pem (1708 bytes)
	I1002 08:12:05.864310  607622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:12:05.894439  607622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:12:05.915415  607622 ssh_runner.go:195] Run: openssl version
	I1002 08:12:05.922327  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660802.pem && ln -fs /usr/share/ca-certificates/5660802.pem /etc/ssl/certs/5660802.pem"
	I1002 08:12:05.935789  607622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660802.pem
	I1002 08:12:05.940985  607622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 07:11 /usr/share/ca-certificates/5660802.pem
	I1002 08:12:05.941053  607622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660802.pem
	I1002 08:12:05.948530  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5660802.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:12:05.962203  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:12:05.976242  607622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:12:05.981681  607622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:57 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:12:05.981765  607622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:12:05.989318  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:12:06.002863  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/566080.pem && ln -fs /usr/share/ca-certificates/566080.pem /etc/ssl/certs/566080.pem"
	I1002 08:12:06.016629  607622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/566080.pem
	I1002 08:12:06.022085  607622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 07:11 /usr/share/ca-certificates/566080.pem
	I1002 08:12:06.022175  607622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/566080.pem
	I1002 08:12:06.029530  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/566080.pem /etc/ssl/certs/51391683.0"
	I1002 08:12:06.043167  607622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:12:06.048696  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:12:06.056285  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:12:06.063559  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:12:06.070831  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:12:06.077711  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:12:06.084901  607622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 08:12:06.092336  607622 kubeadm.go:400] StartCluster: {Name:test-preload-977743 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-977743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:12:06.092410  607622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:12:06.092460  607622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:12:06.133690  607622 cri.go:89] found id: ""
	I1002 08:12:06.133778  607622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:12:06.146750  607622 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:12:06.146772  607622 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:12:06.146822  607622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:12:06.159417  607622 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:12:06.159965  607622 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-977743" does not appear in /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 08:12:06.160098  607622 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-562157/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-977743" cluster setting kubeconfig missing "test-preload-977743" context setting]
	I1002 08:12:06.160389  607622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:12:06.160922  607622 kapi.go:59] client config for test-preload-977743: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.key", CAFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 08:12:06.161360  607622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 08:12:06.161375  607622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 08:12:06.161380  607622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 08:12:06.161383  607622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 08:12:06.161387  607622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 08:12:06.161739  607622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:12:06.173154  607622 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.42
	I1002 08:12:06.173185  607622 kubeadm.go:1160] stopping kube-system containers ...
	I1002 08:12:06.173204  607622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 08:12:06.173262  607622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:12:06.212053  607622 cri.go:89] found id: ""
	I1002 08:12:06.212173  607622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 08:12:06.231653  607622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:12:06.243943  607622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:12:06.243966  607622 kubeadm.go:157] found existing configuration files:
	
	I1002 08:12:06.244037  607622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:12:06.255277  607622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:12:06.255337  607622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:12:06.267184  607622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:12:06.278808  607622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:12:06.278862  607622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:12:06.290512  607622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:12:06.301922  607622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:12:06.301982  607622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:12:06.313854  607622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:12:06.324961  607622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:12:06.325022  607622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:12:06.336648  607622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:12:06.348265  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:06.406305  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:07.441419  607622 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.035068456s)
	I1002 08:12:07.441507  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:07.739244  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:07.808787  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:07.897992  607622 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:12:07.898082  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:08.398169  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:08.898942  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:09.398328  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:09.898515  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:09.939724  607622 api_server.go:72] duration metric: took 2.041743254s to wait for apiserver process to appear ...
	I1002 08:12:09.939750  607622 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:12:09.939769  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:12.702366  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 08:12:12.702409  607622 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 08:12:12.702433  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:12.722331  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 08:12:12.722366  607622 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 08:12:12.940893  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:12.948886  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:12:12.948923  607622 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:12:13.440634  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:13.452737  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:12:13.452771  607622 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:12:13.940476  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:13.946656  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:12:13.946683  607622 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:12:14.440312  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:14.446348  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1002 08:12:14.453706  607622 api_server.go:141] control plane version: v1.32.0
	I1002 08:12:14.453735  607622 api_server.go:131] duration metric: took 4.513979088s to wait for apiserver health ...
	I1002 08:12:14.453745  607622 cni.go:84] Creating CNI manager for ""
	I1002 08:12:14.453751  607622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 08:12:14.455506  607622 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 08:12:14.456802  607622 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 08:12:14.470700  607622 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 08:12:14.493406  607622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:12:14.499589  607622 system_pods.go:59] 7 kube-system pods found
	I1002 08:12:14.499640  607622 system_pods.go:61] "coredns-668d6bf9bc-fjqkc" [8169d3be-577c-440c-8517-782be0c0a2dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:12:14.499649  607622 system_pods.go:61] "etcd-test-preload-977743" [a47b0d96-7f6c-462e-a061-073b0aee835b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:12:14.499657  607622 system_pods.go:61] "kube-apiserver-test-preload-977743" [cbd57ddd-85fa-4801-973e-6e19b815ecaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:12:14.499704  607622 system_pods.go:61] "kube-controller-manager-test-preload-977743" [17abfe29-b7a1-4bf6-8c83-57aaa53b8e3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:12:14.499709  607622 system_pods.go:61] "kube-proxy-r8fdj" [336c1654-6028-4d75-a986-9037ad249142] Running
	I1002 08:12:14.499718  607622 system_pods.go:61] "kube-scheduler-test-preload-977743" [4a1efb9b-84ee-4e7a-8e1b-a4e41e71c727] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:12:14.499725  607622 system_pods.go:61] "storage-provisioner" [106e56a0-5e14-4fe1-9a5b-4ecc1c61e680] Running
	I1002 08:12:14.499730  607622 system_pods.go:74] duration metric: took 6.299417ms to wait for pod list to return data ...
	I1002 08:12:14.499737  607622 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:12:14.505088  607622 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 08:12:14.505115  607622 node_conditions.go:123] node cpu capacity is 2
	I1002 08:12:14.505128  607622 node_conditions.go:105] duration metric: took 5.386419ms to run NodePressure ...
	I1002 08:12:14.505203  607622 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 08:12:14.774118  607622 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 08:12:14.778994  607622 kubeadm.go:743] kubelet initialised
	I1002 08:12:14.779024  607622 kubeadm.go:744] duration metric: took 4.860176ms waiting for restarted kubelet to initialise ...
	I1002 08:12:14.779042  607622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:12:14.801781  607622 ops.go:34] apiserver oom_adj: -16
	I1002 08:12:14.801806  607622 kubeadm.go:601] duration metric: took 8.655027787s to restartPrimaryControlPlane
	I1002 08:12:14.801820  607622 kubeadm.go:402] duration metric: took 8.709491983s to StartCluster
	I1002 08:12:14.801845  607622 settings.go:142] acquiring lock: {Name:mkde88de9cc28e670cb4891970fce50579712197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:12:14.801931  607622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 08:12:14.802665  607622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-562157/kubeconfig: {Name:mkaba69145ae0ebd7ee7f396e649d41ddd82691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:12:14.802946  607622 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:12:14.803047  607622 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:12:14.803201  607622 config.go:182] Loaded profile config "test-preload-977743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 08:12:14.803235  607622 addons.go:69] Setting default-storageclass=true in profile "test-preload-977743"
	I1002 08:12:14.803197  607622 addons.go:69] Setting storage-provisioner=true in profile "test-preload-977743"
	I1002 08:12:14.803269  607622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-977743"
	I1002 08:12:14.803282  607622 addons.go:238] Setting addon storage-provisioner=true in "test-preload-977743"
	W1002 08:12:14.803296  607622 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:12:14.803324  607622 host.go:66] Checking if "test-preload-977743" exists ...
	I1002 08:12:14.803626  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:12:14.803671  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:12:14.803751  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:12:14.803816  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:12:14.804763  607622 out.go:179] * Verifying Kubernetes components...
	I1002 08:12:14.806310  607622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:12:14.818424  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I1002 08:12:14.818501  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I1002 08:12:14.818952  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:12:14.818956  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:12:14.819525  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:12:14.819543  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:12:14.819677  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:12:14.819707  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:12:14.819987  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:12:14.820081  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:12:14.820315  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetState
	I1002 08:12:14.820562  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:12:14.820608  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:12:14.822930  607622 kapi.go:59] client config for test-preload-977743: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.key", CAFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 08:12:14.823346  607622 addons.go:238] Setting addon default-storageclass=true in "test-preload-977743"
	W1002 08:12:14.823372  607622 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:12:14.823407  607622 host.go:66] Checking if "test-preload-977743" exists ...
	I1002 08:12:14.823718  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:12:14.823802  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:12:14.834958  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I1002 08:12:14.835650  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:12:14.836272  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:12:14.836301  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:12:14.836696  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:12:14.836913  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetState
	I1002 08:12:14.837482  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I1002 08:12:14.837990  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:12:14.838493  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:12:14.838519  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:12:14.838909  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:12:14.839191  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:14.839639  607622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:12:14.839702  607622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:12:14.841282  607622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:12:14.842756  607622 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:12:14.842777  607622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:12:14.842798  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:12:14.846150  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:14.846705  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:14.846732  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:14.846948  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:12:14.847190  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:12:14.847396  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:12:14.847592  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:12:14.854668  607622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I1002 08:12:14.855175  607622 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:12:14.855640  607622 main.go:141] libmachine: Using API Version  1
	I1002 08:12:14.855655  607622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:12:14.856047  607622 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:12:14.856287  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetState
	I1002 08:12:14.858528  607622 main.go:141] libmachine: (test-preload-977743) Calling .DriverName
	I1002 08:12:14.858825  607622 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:12:14.858841  607622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:12:14.858860  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHHostname
	I1002 08:12:14.862251  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:14.862795  607622 main.go:141] libmachine: (test-preload-977743) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:d2:4a", ip: ""} in network mk-test-preload-977743: {Iface:virbr1 ExpiryTime:2025-10-02 09:11:54 +0000 UTC Type:0 Mac:52:54:00:ba:d2:4a Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-977743 Clientid:01:52:54:00:ba:d2:4a}
	I1002 08:12:14.862832  607622 main.go:141] libmachine: (test-preload-977743) DBG | domain test-preload-977743 has defined IP address 192.168.39.42 and MAC address 52:54:00:ba:d2:4a in network mk-test-preload-977743
	I1002 08:12:14.862982  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHPort
	I1002 08:12:14.863199  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHKeyPath
	I1002 08:12:14.863379  607622 main.go:141] libmachine: (test-preload-977743) Calling .GetSSHUsername
	I1002 08:12:14.863539  607622 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/test-preload-977743/id_rsa Username:docker}
	I1002 08:12:15.041395  607622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:12:15.062334  607622 node_ready.go:35] waiting up to 6m0s for node "test-preload-977743" to be "Ready" ...
	I1002 08:12:15.066127  607622 node_ready.go:49] node "test-preload-977743" is "Ready"
	I1002 08:12:15.066186  607622 node_ready.go:38] duration metric: took 3.792663ms for node "test-preload-977743" to be "Ready" ...
	I1002 08:12:15.066207  607622 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:12:15.066279  607622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:12:15.089512  607622 api_server.go:72] duration metric: took 286.525338ms to wait for apiserver process to appear ...
	I1002 08:12:15.089549  607622 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:12:15.089575  607622 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1002 08:12:15.096085  607622 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1002 08:12:15.097208  607622 api_server.go:141] control plane version: v1.32.0
	I1002 08:12:15.097242  607622 api_server.go:131] duration metric: took 7.683514ms to wait for apiserver health ...
	I1002 08:12:15.097254  607622 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:12:15.101064  607622 system_pods.go:59] 7 kube-system pods found
	I1002 08:12:15.101101  607622 system_pods.go:61] "coredns-668d6bf9bc-fjqkc" [8169d3be-577c-440c-8517-782be0c0a2dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:12:15.101110  607622 system_pods.go:61] "etcd-test-preload-977743" [a47b0d96-7f6c-462e-a061-073b0aee835b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:12:15.101121  607622 system_pods.go:61] "kube-apiserver-test-preload-977743" [cbd57ddd-85fa-4801-973e-6e19b815ecaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:12:15.101129  607622 system_pods.go:61] "kube-controller-manager-test-preload-977743" [17abfe29-b7a1-4bf6-8c83-57aaa53b8e3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:12:15.101145  607622 system_pods.go:61] "kube-proxy-r8fdj" [336c1654-6028-4d75-a986-9037ad249142] Running
	I1002 08:12:15.101155  607622 system_pods.go:61] "kube-scheduler-test-preload-977743" [4a1efb9b-84ee-4e7a-8e1b-a4e41e71c727] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:12:15.101163  607622 system_pods.go:61] "storage-provisioner" [106e56a0-5e14-4fe1-9a5b-4ecc1c61e680] Running
	I1002 08:12:15.101172  607622 system_pods.go:74] duration metric: took 3.909539ms to wait for pod list to return data ...
	I1002 08:12:15.101182  607622 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:12:15.103494  607622 default_sa.go:45] found service account: "default"
	I1002 08:12:15.103517  607622 default_sa.go:55] duration metric: took 2.323915ms for default service account to be created ...
	I1002 08:12:15.103526  607622 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:12:15.107052  607622 system_pods.go:86] 7 kube-system pods found
	I1002 08:12:15.107085  607622 system_pods.go:89] "coredns-668d6bf9bc-fjqkc" [8169d3be-577c-440c-8517-782be0c0a2dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:12:15.107094  607622 system_pods.go:89] "etcd-test-preload-977743" [a47b0d96-7f6c-462e-a061-073b0aee835b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:12:15.107108  607622 system_pods.go:89] "kube-apiserver-test-preload-977743" [cbd57ddd-85fa-4801-973e-6e19b815ecaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:12:15.107114  607622 system_pods.go:89] "kube-controller-manager-test-preload-977743" [17abfe29-b7a1-4bf6-8c83-57aaa53b8e3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:12:15.107117  607622 system_pods.go:89] "kube-proxy-r8fdj" [336c1654-6028-4d75-a986-9037ad249142] Running
	I1002 08:12:15.107123  607622 system_pods.go:89] "kube-scheduler-test-preload-977743" [4a1efb9b-84ee-4e7a-8e1b-a4e41e71c727] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:12:15.107129  607622 system_pods.go:89] "storage-provisioner" [106e56a0-5e14-4fe1-9a5b-4ecc1c61e680] Running
	I1002 08:12:15.107148  607622 system_pods.go:126] duration metric: took 3.604139ms to wait for k8s-apps to be running ...
	I1002 08:12:15.107162  607622 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:12:15.107217  607622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:12:15.138272  607622 system_svc.go:56] duration metric: took 31.09532ms WaitForService to wait for kubelet
	I1002 08:12:15.138320  607622 kubeadm.go:586] duration metric: took 335.339ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:12:15.138346  607622 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:12:15.145293  607622 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 08:12:15.145319  607622 node_conditions.go:123] node cpu capacity is 2
	I1002 08:12:15.145331  607622 node_conditions.go:105] duration metric: took 6.977946ms to run NodePressure ...
	I1002 08:12:15.145342  607622 start.go:241] waiting for startup goroutines ...
	I1002 08:12:15.147114  607622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:12:15.160859  607622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:12:15.817522  607622 main.go:141] libmachine: Making call to close driver server
	I1002 08:12:15.817553  607622 main.go:141] libmachine: (test-preload-977743) Calling .Close
	I1002 08:12:15.817556  607622 main.go:141] libmachine: Making call to close driver server
	I1002 08:12:15.817577  607622 main.go:141] libmachine: (test-preload-977743) Calling .Close
	I1002 08:12:15.817891  607622 main.go:141] libmachine: Successfully made call to close driver server
	I1002 08:12:15.817910  607622 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 08:12:15.817920  607622 main.go:141] libmachine: Making call to close driver server
	I1002 08:12:15.817939  607622 main.go:141] libmachine: (test-preload-977743) DBG | Closing plugin on server side
	I1002 08:12:15.817963  607622 main.go:141] libmachine: (test-preload-977743) DBG | Closing plugin on server side
	I1002 08:12:15.817964  607622 main.go:141] libmachine: Successfully made call to close driver server
	I1002 08:12:15.817979  607622 main.go:141] libmachine: (test-preload-977743) Calling .Close
	I1002 08:12:15.817984  607622 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 08:12:15.817993  607622 main.go:141] libmachine: Making call to close driver server
	I1002 08:12:15.818004  607622 main.go:141] libmachine: (test-preload-977743) Calling .Close
	I1002 08:12:15.818357  607622 main.go:141] libmachine: Successfully made call to close driver server
	I1002 08:12:15.818418  607622 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 08:12:15.818423  607622 main.go:141] libmachine: (test-preload-977743) DBG | Closing plugin on server side
	I1002 08:12:15.818417  607622 main.go:141] libmachine: (test-preload-977743) DBG | Closing plugin on server side
	I1002 08:12:15.818565  607622 main.go:141] libmachine: Successfully made call to close driver server
	I1002 08:12:15.818622  607622 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 08:12:15.826246  607622 main.go:141] libmachine: Making call to close driver server
	I1002 08:12:15.826261  607622 main.go:141] libmachine: (test-preload-977743) Calling .Close
	I1002 08:12:15.826504  607622 main.go:141] libmachine: Successfully made call to close driver server
	I1002 08:12:15.826521  607622 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 08:12:15.828264  607622 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:12:15.829552  607622 addons.go:514] duration metric: took 1.02652204s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:12:15.829875  607622 start.go:246] waiting for cluster config update ...
	I1002 08:12:15.829896  607622 start.go:255] writing updated cluster config ...
	I1002 08:12:15.830324  607622 ssh_runner.go:195] Run: rm -f paused
	I1002 08:12:15.837078  607622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:12:15.837523  607622 kapi.go:59] client config for test-preload-977743: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/profiles/test-preload-977743/client.key", CAFile:"/home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 08:12:15.840290  607622 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-fjqkc" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:12:17.847075  607622 pod_ready.go:104] pod "coredns-668d6bf9bc-fjqkc" is not "Ready", error: <nil>
	I1002 08:12:18.346667  607622 pod_ready.go:94] pod "coredns-668d6bf9bc-fjqkc" is "Ready"
	I1002 08:12:18.346707  607622 pod_ready.go:86] duration metric: took 2.506396493s for pod "coredns-668d6bf9bc-fjqkc" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:18.349319  607622 pod_ready.go:83] waiting for pod "etcd-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:12:20.354986  607622 pod_ready.go:104] pod "etcd-test-preload-977743" is not "Ready", error: <nil>
	W1002 08:12:22.356225  607622 pod_ready.go:104] pod "etcd-test-preload-977743" is not "Ready", error: <nil>
	I1002 08:12:23.855596  607622 pod_ready.go:94] pod "etcd-test-preload-977743" is "Ready"
	I1002 08:12:23.855634  607622 pod_ready.go:86] duration metric: took 5.506293971s for pod "etcd-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:23.857426  607622 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:23.862039  607622 pod_ready.go:94] pod "kube-apiserver-test-preload-977743" is "Ready"
	I1002 08:12:23.862065  607622 pod_ready.go:86] duration metric: took 4.619128ms for pod "kube-apiserver-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:23.864784  607622 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:12:25.871702  607622 pod_ready.go:104] pod "kube-controller-manager-test-preload-977743" is not "Ready", error: <nil>
	I1002 08:12:26.873251  607622 pod_ready.go:94] pod "kube-controller-manager-test-preload-977743" is "Ready"
	I1002 08:12:26.873290  607622 pod_ready.go:86] duration metric: took 3.008481989s for pod "kube-controller-manager-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:26.875871  607622 pod_ready.go:83] waiting for pod "kube-proxy-r8fdj" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:26.881791  607622 pod_ready.go:94] pod "kube-proxy-r8fdj" is "Ready"
	I1002 08:12:26.881813  607622 pod_ready.go:86] duration metric: took 5.914756ms for pod "kube-proxy-r8fdj" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:26.884063  607622 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:27.890164  607622 pod_ready.go:94] pod "kube-scheduler-test-preload-977743" is "Ready"
	I1002 08:12:27.890200  607622 pod_ready.go:86] duration metric: took 1.006116644s for pod "kube-scheduler-test-preload-977743" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:12:27.890213  607622 pod_ready.go:40] duration metric: took 12.053099132s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:12:27.936440  607622 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1002 08:12:27.938284  607622 out.go:203] 
	W1002 08:12:27.939690  607622 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1002 08:12:27.940937  607622 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1002 08:12:27.942587  607622 out.go:179] * Done! kubectl is now configured to use "test-preload-977743" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.916032707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e23d6167-8c8e-4440-b149-8adfac768297 name=/runtime.v1.RuntimeService/Version
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.918252863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ece80641-57e1-403a-8eb6-b3ce6ff9e088 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.919477143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392748919450066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ece80641-57e1-403a-8eb6-b3ce6ff9e088 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.920042758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d19559a-a302-4488-ac6e-13b24b1c4939 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.920218637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d19559a-a302-4488-ac6e-13b24b1c4939 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.920629187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7427219a719e4822f2c69619f507ed8efba92191a0565e4fbad9e7edf935b35c,PodSandboxId:1af648f26311648ec5de377e315a0669091e17a6c1ec6dfc7f285acab0babded,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759392736907169507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fjqkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8169d3be-577c-440c-8517-782be0c0a2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79280ace516f6a168b3d058dd20448ab4c58d627294a5bad894242f91d00bbe,PodSandboxId:e61c705ca3312da061459539c5b466bd8de166e2dac5066477ac45509f7975f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759392733270464051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8fdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c1654-6028-4d75-a986-9037ad249142,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096b221981be0110e4342d4083983ebc2ab772eef90889f13d2c70cd02b8829f,PodSandboxId:f9f3af0c39d2e9885794bc163d6b83603361086852c45a93a232722c20ca062a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759392733271633719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e56a0-5e14-4fe1-9a5b-4ecc1c61e680,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29157e867cc202f1dc0ac6c34b7b591b8c9fbc1d828e0e38d5c2415522ad9b98,PodSandboxId:0c1e87adc992488431368001bb7a904ac58aae905b83db84784df50f5a755af1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759392729455061452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431b28405feb02a989506a2b5f9c85aebe61518af0d080b545b37f7ec87b9f0,PodSandboxId:1e9b3480157f4bba774f29dddd6b902c0e490fa41b908ea61bfe779c991bcf39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759392729452400490,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e158759d61574920158d19df580578,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48f5bbc3e9c99a70843509fb545237a23466df9affb08420b0b679aa730769,PodSandboxId:da0f70b7b28af82d0ce683023f7a9e64db55fbc2ba4e404aa3ee479a1a15fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759392729432308590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035cfcf3080f261158506f6d67b23b20,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6106c243c74192a0141b42eb11525300e485b84629b0fe8f3dc82315dcd9b19c,PodSandboxId:547d7f3b260ae5ea40e4b75f7c9b006a79f99d976ba38abb3ae8220c4918c1ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759392729407043597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2daf6eb763e488a6d74236c2456a1c7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d19559a-a302-4488-ac6e-13b24b1c4939 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.961768178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2167249-f446-45c7-978b-4ef84d84a00c name=/runtime.v1.RuntimeService/Version
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.961836047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2167249-f446-45c7-978b-4ef84d84a00c name=/runtime.v1.RuntimeService/Version
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.967609305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df2fecd6-173c-4a42-8fbc-5f863e440c14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.968482069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392748968457074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df2fecd6-173c-4a42-8fbc-5f863e440c14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.969138338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb64c24d-2106-41d3-990a-82b44382c1ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.969231904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb64c24d-2106-41d3-990a-82b44382c1ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:28 test-preload-977743 crio[837]: time="2025-10-02 08:12:28.969446839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7427219a719e4822f2c69619f507ed8efba92191a0565e4fbad9e7edf935b35c,PodSandboxId:1af648f26311648ec5de377e315a0669091e17a6c1ec6dfc7f285acab0babded,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759392736907169507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fjqkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8169d3be-577c-440c-8517-782be0c0a2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79280ace516f6a168b3d058dd20448ab4c58d627294a5bad894242f91d00bbe,PodSandboxId:e61c705ca3312da061459539c5b466bd8de166e2dac5066477ac45509f7975f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759392733270464051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8fdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 336c1654-6028-4d75-a986-9037ad249142,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096b221981be0110e4342d4083983ebc2ab772eef90889f13d2c70cd02b8829f,PodSandboxId:f9f3af0c39d2e9885794bc163d6b83603361086852c45a93a232722c20ca062a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759392733271633719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10
6e56a0-5e14-4fe1-9a5b-4ecc1c61e680,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29157e867cc202f1dc0ac6c34b7b591b8c9fbc1d828e0e38d5c2415522ad9b98,PodSandboxId:0c1e87adc992488431368001bb7a904ac58aae905b83db84784df50f5a755af1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759392729455061452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431b28405feb02a989506a2b5f9c85aebe61518af0d080b545b37f7ec87b9f0,PodSandboxId:1e9b3480157f4bba774f29dddd6b902c0e490fa41b908ea61bfe779c991bcf39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759392729452400490,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b1e158759d61574920158d19df580578,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48f5bbc3e9c99a70843509fb545237a23466df9affb08420b0b679aa730769,PodSandboxId:da0f70b7b28af82d0ce683023f7a9e64db55fbc2ba4e404aa3ee479a1a15fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759392729432308590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035cfcf3080f261158506f6d67b23b20,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6106c243c74192a0141b42eb11525300e485b84629b0fe8f3dc82315dcd9b19c,PodSandboxId:547d7f3b260ae5ea40e4b75f7c9b006a79f99d976ba38abb3ae8220c4918c1ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759392729407043597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2daf6eb763e488a6d74236c2456a1c7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb64c24d-2106-41d3-990a-82b44382c1ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.006173942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41b57582-fd44-4e65-98bf-85d2db1a0f10 name=/runtime.v1.RuntimeService/Version
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.006721943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41b57582-fd44-4e65-98bf-85d2db1a0f10 name=/runtime.v1.RuntimeService/Version
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.006659710Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=74e048bc-11ef-418a-b9de-2e22d0de3289 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.007073811Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1af648f26311648ec5de377e315a0669091e17a6c1ec6dfc7f285acab0babded,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqkc,Uid:8169d3be-577c-440c-8517-782be0c0a2dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392736675411924,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-fjqkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8169d3be-577c-440c-8517-782be0c0a2dc,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T08:12:12.819020906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e61c705ca3312da061459539c5b466bd8de166e2dac5066477ac45509f7975f0,Metadata:&PodSandboxMetadata{Name:kube-proxy-r8fdj,Uid:336c1654-6028-4d75-a986-9037ad249142,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1759392733135872946,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-r8fdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c1654-6028-4d75-a986-9037ad249142,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T08:12:12.819028860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9f3af0c39d2e9885794bc163d6b83603361086852c45a93a232722c20ca062a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:106e56a0-5e14-4fe1-9a5b-4ecc1c61e680,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392733135281388,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e56a0-5e14-4fe1-9a5b-4ecc
1c61e680,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-02T08:12:12.819031193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0c1e87adc992488431368001bb7a904ac58aae905b83db84784df50f5a755af1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-977743,Ui
d:70a7ad6e3c4fa829ae1c0d7ca04a8e7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392729176073309,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,kubernetes.io/config.seen: 2025-10-02T08:12:07.823027806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e9b3480157f4bba774f29dddd6b902c0e490fa41b908ea61bfe779c991bcf39,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-977743,Uid:b1e158759d61574920158d19df580578,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392729170001555,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-977743,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e158759d61574920158d19df580578,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b1e158759d61574920158d19df580578,kubernetes.io/config.seen: 2025-10-02T08:12:07.823029207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da0f70b7b28af82d0ce683023f7a9e64db55fbc2ba4e404aa3ee479a1a15fcdb,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-977743,Uid:035cfcf3080f261158506f6d67b23b20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392729168409362,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035cfcf3080f261158506f6d67b23b20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.42:2379,kubernetes.io/config.hash: 035cfcf3080f261158506f6d67b23b20,kubernetes.io/config.seen: 2025-10-02T08:
12:07.888502376Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:547d7f3b260ae5ea40e4b75f7c9b006a79f99d976ba38abb3ae8220c4918c1ce,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-977743,Uid:e2daf6eb763e488a6d74236c2456a1c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759392729162474635,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2daf6eb763e488a6d74236c2456a1c7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.42:8443,kubernetes.io/config.hash: e2daf6eb763e488a6d74236c2456a1c7,kubernetes.io/config.seen: 2025-10-02T08:12:07.823023829Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=74e048bc-11ef-418a-b9de-2e22d0de3289 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.007713697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57a93d6a-8c31-4eff-8aa8-2622e379e81d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.008146565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392749008078796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57a93d6a-8c31-4eff-8aa8-2622e379e81d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.008729520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7225b65-f479-47f5-8dd5-d7fa6c6f182b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.008806176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7225b65-f479-47f5-8dd5-d7fa6c6f182b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.008856331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea41f8a-e980-4665-9c8a-ef3add018744 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.008920985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea41f8a-e980-4665-9c8a-ef3add018744 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.009022538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7427219a719e4822f2c69619f507ed8efba92191a0565e4fbad9e7edf935b35c,PodSandboxId:1af648f26311648ec5de377e315a0669091e17a6c1ec6dfc7f285acab0babded,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759392736907169507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fjqkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8169d3be-577c-440c-8517-782be0c0a2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79280ace516f6a168b3d058dd20448ab4c58d627294a5bad894242f91d00bbe,PodSandboxId:e61c705ca3312da061459539c5b466bd8de166e2dac5066477ac45509f7975f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759392733270464051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8fdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 336c1654-6028-4d75-a986-9037ad249142,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096b221981be0110e4342d4083983ebc2ab772eef90889f13d2c70cd02b8829f,PodSandboxId:f9f3af0c39d2e9885794bc163d6b83603361086852c45a93a232722c20ca062a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759392733271633719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10
6e56a0-5e14-4fe1-9a5b-4ecc1c61e680,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29157e867cc202f1dc0ac6c34b7b591b8c9fbc1d828e0e38d5c2415522ad9b98,PodSandboxId:0c1e87adc992488431368001bb7a904ac58aae905b83db84784df50f5a755af1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759392729455061452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431b28405feb02a989506a2b5f9c85aebe61518af0d080b545b37f7ec87b9f0,PodSandboxId:1e9b3480157f4bba774f29dddd6b902c0e490fa41b908ea61bfe779c991bcf39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759392729452400490,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b1e158759d61574920158d19df580578,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48f5bbc3e9c99a70843509fb545237a23466df9affb08420b0b679aa730769,PodSandboxId:da0f70b7b28af82d0ce683023f7a9e64db55fbc2ba4e404aa3ee479a1a15fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759392729432308590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035cfcf3080f261158506f6d67b23b20,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6106c243c74192a0141b42eb11525300e485b84629b0fe8f3dc82315dcd9b19c,PodSandboxId:547d7f3b260ae5ea40e4b75f7c9b006a79f99d976ba38abb3ae8220c4918c1ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759392729407043597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2daf6eb763e488a6d74236c2456a1c7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7225b65-f479-47f5-8dd5-d7fa6c6f182b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 08:12:29 test-preload-977743 crio[837]: time="2025-10-02 08:12:29.009151768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7427219a719e4822f2c69619f507ed8efba92191a0565e4fbad9e7edf935b35c,PodSandboxId:1af648f26311648ec5de377e315a0669091e17a6c1ec6dfc7f285acab0babded,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759392736907169507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fjqkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8169d3be-577c-440c-8517-782be0c0a2dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79280ace516f6a168b3d058dd20448ab4c58d627294a5bad894242f91d00bbe,PodSandboxId:e61c705ca3312da061459539c5b466bd8de166e2dac5066477ac45509f7975f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759392733270464051,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r8fdj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 336c1654-6028-4d75-a986-9037ad249142,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:096b221981be0110e4342d4083983ebc2ab772eef90889f13d2c70cd02b8829f,PodSandboxId:f9f3af0c39d2e9885794bc163d6b83603361086852c45a93a232722c20ca062a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759392733271633719,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10
6e56a0-5e14-4fe1-9a5b-4ecc1c61e680,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29157e867cc202f1dc0ac6c34b7b591b8c9fbc1d828e0e38d5c2415522ad9b98,PodSandboxId:0c1e87adc992488431368001bb7a904ac58aae905b83db84784df50f5a755af1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759392729455061452,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 70a7ad6e3c4fa829ae1c0d7ca04a8e7b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431b28405feb02a989506a2b5f9c85aebe61518af0d080b545b37f7ec87b9f0,PodSandboxId:1e9b3480157f4bba774f29dddd6b902c0e490fa41b908ea61bfe779c991bcf39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759392729452400490,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b1e158759d61574920158d19df580578,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48f5bbc3e9c99a70843509fb545237a23466df9affb08420b0b679aa730769,PodSandboxId:da0f70b7b28af82d0ce683023f7a9e64db55fbc2ba4e404aa3ee479a1a15fcdb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759392729432308590,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035cfcf3080f261158506f6d67b23b20,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6106c243c74192a0141b42eb11525300e485b84629b0fe8f3dc82315dcd9b19c,PodSandboxId:547d7f3b260ae5ea40e4b75f7c9b006a79f99d976ba38abb3ae8220c4918c1ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759392729407043597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-977743,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2daf6eb763e488a6d74236c2456a1c7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea41f8a-e980-4665-9c8a-ef3add018744 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7427219a719e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   1af648f263116       coredns-668d6bf9bc-fjqkc
	096b221981be0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   f9f3af0c39d2e       storage-provisioner
	d79280ace516f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   e61c705ca3312       kube-proxy-r8fdj
	29157e867cc20       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   0c1e87adc9924       kube-controller-manager-test-preload-977743
	4431b28405feb       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   1e9b3480157f4       kube-scheduler-test-preload-977743
	1f48f5bbc3e9c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   da0f70b7b28af       etcd-test-preload-977743
	6106c243c7419       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   547d7f3b260ae       kube-apiserver-test-preload-977743
	
	
	==> coredns [7427219a719e4822f2c69619f507ed8efba92191a0565e4fbad9e7edf935b35c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41467 - 50846 "HINFO IN 4344095822503975275.6263881589979258443. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026448515s
	
	
	==> describe nodes <==
	Name:               test-preload-977743
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-977743
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=test-preload-977743
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_10_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:10:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-977743
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:12:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:12:14 +0000   Thu, 02 Oct 2025 08:10:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:12:14 +0000   Thu, 02 Oct 2025 08:10:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:12:14 +0000   Thu, 02 Oct 2025 08:10:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:12:14 +0000   Thu, 02 Oct 2025 08:12:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    test-preload-977743
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d2e4402d8014631b0afc3c2bb43586b
	  System UUID:                4d2e4402-d801-4631-b0af-c3c2bb43586b
	  Boot ID:                    b369ffdc-6cb3-457b-8546-82f9547fbdb7
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-fjqkc                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-977743                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-977743             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-977743    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-r8fdj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-test-preload-977743             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 103s               kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  109s               kubelet          Node test-preload-977743 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    109s               kubelet          Node test-preload-977743 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s               kubelet          Node test-preload-977743 status is now: NodeHasSufficientPID
	  Normal   Starting                 109s               kubelet          Starting kubelet.
	  Normal   NodeReady                108s               kubelet          Node test-preload-977743 status is now: NodeReady
	  Normal   RegisteredNode           105s               node-controller  Node test-preload-977743 event: Registered Node test-preload-977743 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-977743 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-977743 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-977743 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-977743 has been rebooted, boot id: b369ffdc-6cb3-457b-8546-82f9547fbdb7
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-977743 event: Registered Node test-preload-977743 in Controller
	
	
	==> dmesg <==
	[Oct 2 08:11] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000054] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003299] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.996164] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 2 08:12] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.097380] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.500275] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.030331] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [1f48f5bbc3e9c99a70843509fb545237a23466df9affb08420b0b679aa730769] <==
	{"level":"info","ts":"2025-10-02T08:12:09.912402Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"beed476d98f529f8","local-member-id":"be5e8f7004ae306c","added-peer-id":"be5e8f7004ae306c","added-peer-peer-urls":["https://192.168.39.42:2380"]}
	{"level":"info","ts":"2025-10-02T08:12:09.912495Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"beed476d98f529f8","local-member-id":"be5e8f7004ae306c","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:12:09.912517Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:12:09.924428Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T08:12:09.947447Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T08:12:09.947728Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"be5e8f7004ae306c","initial-advertise-peer-urls":["https://192.168.39.42:2380"],"listen-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.42:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T08:12:09.947783Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T08:12:09.947896Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2025-10-02T08:12:09.947921Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2025-10-02T08:12:11.560219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T08:12:11.560261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T08:12:11.560301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c received MsgPreVoteResp from be5e8f7004ae306c at term 2"}
	{"level":"info","ts":"2025-10-02T08:12:11.560317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T08:12:11.560326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c received MsgVoteResp from be5e8f7004ae306c at term 3"}
	{"level":"info","ts":"2025-10-02T08:12:11.560344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became leader at term 3"}
	{"level":"info","ts":"2025-10-02T08:12:11.560350Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be5e8f7004ae306c elected leader be5e8f7004ae306c at term 3"}
	{"level":"info","ts":"2025-10-02T08:12:11.565473Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"be5e8f7004ae306c","local-member-attributes":"{Name:test-preload-977743 ClientURLs:[https://192.168.39.42:2379]}","request-path":"/0/members/be5e8f7004ae306c/attributes","cluster-id":"beed476d98f529f8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T08:12:11.565590Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:12:11.565901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T08:12:11.565943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T08:12:11.565645Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:12:11.566987Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T08:12:11.567012Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T08:12:11.567731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T08:12:11.568199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.42:2379"}
	
	
	==> kernel <==
	 08:12:29 up 0 min,  0 users,  load average: 1.00, 0.27, 0.09
	Linux test-preload-977743 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6106c243c74192a0141b42eb11525300e485b84629b0fe8f3dc82315dcd9b19c] <==
	I1002 08:12:12.754498       1 shared_informer.go:320] Caches are synced for configmaps
	I1002 08:12:12.756556       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1002 08:12:12.767751       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1002 08:12:12.767785       1 policy_source.go:240] refreshing policies
	I1002 08:12:12.773504       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:12:12.773683       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:12:12.780857       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:12:12.781599       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1002 08:12:12.785349       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:12:12.785896       1 aggregator.go:171] initial CRD sync complete...
	I1002 08:12:12.785934       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 08:12:12.785951       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:12:12.785966       1 cache.go:39] Caches are synced for autoregister controller
	E1002 08:12:12.808436       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:12:12.848804       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:12:12.848840       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:12:12.915814       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1002 08:12:13.657652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:12:14.600880       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1002 08:12:14.648903       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1002 08:12:14.675457       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:12:14.682164       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:12:16.056991       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1002 08:12:16.252987       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:12:16.355509       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [29157e867cc202f1dc0ac6c34b7b591b8c9fbc1d828e0e38d5c2415522ad9b98] <==
	I1002 08:12:15.984563       1 shared_informer.go:320] Caches are synced for garbage collector
	I1002 08:12:15.984614       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:12:15.984633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:12:15.989691       1 shared_informer.go:320] Caches are synced for cronjob
	I1002 08:12:15.991089       1 shared_informer.go:320] Caches are synced for garbage collector
	I1002 08:12:15.996560       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1002 08:12:15.996653       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1002 08:12:15.997720       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1002 08:12:15.997852       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1002 08:12:16.001139       1 shared_informer.go:320] Caches are synced for persistent volume
	I1002 08:12:16.001200       1 shared_informer.go:320] Caches are synced for PVC protection
	I1002 08:12:16.001212       1 shared_informer.go:320] Caches are synced for endpoint
	I1002 08:12:16.001530       1 shared_informer.go:320] Caches are synced for TTL
	I1002 08:12:16.001221       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1002 08:12:16.001859       1 shared_informer.go:320] Caches are synced for PV protection
	I1002 08:12:16.007017       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1002 08:12:16.011446       1 shared_informer.go:320] Caches are synced for attach detach
	I1002 08:12:16.012873       1 shared_informer.go:320] Caches are synced for job
	I1002 08:12:16.014132       1 shared_informer.go:320] Caches are synced for resource quota
	I1002 08:12:16.017518       1 shared_informer.go:320] Caches are synced for disruption
	I1002 08:12:16.065041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="136.126074ms"
	I1002 08:12:16.065330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.312µs"
	I1002 08:12:17.989216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="122.253µs"
	I1002 08:12:18.029323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.081685ms"
	I1002 08:12:18.029658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="216.748µs"
	
	
	==> kube-proxy [d79280ace516f6a168b3d058dd20448ab4c58d627294a5bad894242f91d00bbe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 08:12:13.496570       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 08:12:13.508279       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.42"]
	E1002 08:12:13.508492       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:12:13.543681       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1002 08:12:13.543801       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 08:12:13.543917       1 server_linux.go:170] "Using iptables Proxier"
	I1002 08:12:13.547251       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:12:13.547526       1 server.go:497] "Version info" version="v1.32.0"
	I1002 08:12:13.547562       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:12:13.549218       1 config.go:199] "Starting service config controller"
	I1002 08:12:13.549258       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 08:12:13.549285       1 config.go:105] "Starting endpoint slice config controller"
	I1002 08:12:13.549288       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 08:12:13.552459       1 config.go:329] "Starting node config controller"
	I1002 08:12:13.552550       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 08:12:13.649512       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 08:12:13.649545       1 shared_informer.go:320] Caches are synced for service config
	I1002 08:12:13.653759       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4431b28405feb02a989506a2b5f9c85aebe61518af0d080b545b37f7ec87b9f0] <==
	I1002 08:12:10.533191       1 serving.go:386] Generated self-signed cert in-memory
	W1002 08:12:12.688325       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 08:12:12.688365       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 08:12:12.688375       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 08:12:12.688384       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 08:12:12.746774       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1002 08:12:12.746883       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:12:12.755630       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 08:12:12.757908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:12:12.757956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 08:12:12.757977       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:12:12.858917       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 08:12:10 test-preload-977743 kubelet[1159]: E1002 08:12:10.934580    1159 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-977743\" not found" node="test-preload-977743"
	Oct 02 08:12:10 test-preload-977743 kubelet[1159]: E1002 08:12:10.935236    1159 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-977743\" not found" node="test-preload-977743"
	Oct 02 08:12:11 test-preload-977743 kubelet[1159]: I1002 08:12:11.025570    1159 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-977743"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.805612    1159 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-977743"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.805727    1159 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-977743"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.805749    1159 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.807690    1159 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.809476    1159 setters.go:602] "Node became not ready" node="test-preload-977743" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T08:12:12Z","lastTransitionTime":"2025-10-02T08:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.815480    1159 apiserver.go:52] "Watching apiserver"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: E1002 08:12:12.820451    1159 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-fjqkc" podUID="8169d3be-577c-440c-8517-782be0c0a2dc"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.840896    1159 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.903408    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/106e56a0-5e14-4fe1-9a5b-4ecc1c61e680-tmp\") pod \"storage-provisioner\" (UID: \"106e56a0-5e14-4fe1-9a5b-4ecc1c61e680\") " pod="kube-system/storage-provisioner"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.903459    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336c1654-6028-4d75-a986-9037ad249142-xtables-lock\") pod \"kube-proxy-r8fdj\" (UID: \"336c1654-6028-4d75-a986-9037ad249142\") " pod="kube-system/kube-proxy-r8fdj"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: I1002 08:12:12.903483    1159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336c1654-6028-4d75-a986-9037ad249142-lib-modules\") pod \"kube-proxy-r8fdj\" (UID: \"336c1654-6028-4d75-a986-9037ad249142\") " pod="kube-system/kube-proxy-r8fdj"
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: E1002 08:12:12.903726    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 08:12:12 test-preload-977743 kubelet[1159]: E1002 08:12:12.903789    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume podName:8169d3be-577c-440c-8517-782be0c0a2dc nodeName:}" failed. No retries permitted until 2025-10-02 08:12:13.403764509 +0000 UTC m=+5.691721721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume") pod "coredns-668d6bf9bc-fjqkc" (UID: "8169d3be-577c-440c-8517-782be0c0a2dc") : object "kube-system"/"coredns" not registered
	Oct 02 08:12:13 test-preload-977743 kubelet[1159]: E1002 08:12:13.407470    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 08:12:13 test-preload-977743 kubelet[1159]: E1002 08:12:13.407546    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume podName:8169d3be-577c-440c-8517-782be0c0a2dc nodeName:}" failed. No retries permitted until 2025-10-02 08:12:14.407533323 +0000 UTC m=+6.695490535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume") pod "coredns-668d6bf9bc-fjqkc" (UID: "8169d3be-577c-440c-8517-782be0c0a2dc") : object "kube-system"/"coredns" not registered
	Oct 02 08:12:14 test-preload-977743 kubelet[1159]: E1002 08:12:14.416433    1159 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 08:12:14 test-preload-977743 kubelet[1159]: E1002 08:12:14.416499    1159 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume podName:8169d3be-577c-440c-8517-782be0c0a2dc nodeName:}" failed. No retries permitted until 2025-10-02 08:12:16.416486619 +0000 UTC m=+8.704443842 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8169d3be-577c-440c-8517-782be0c0a2dc-config-volume") pod "coredns-668d6bf9bc-fjqkc" (UID: "8169d3be-577c-440c-8517-782be0c0a2dc") : object "kube-system"/"coredns" not registered
	Oct 02 08:12:14 test-preload-977743 kubelet[1159]: I1002 08:12:14.622254    1159 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 02 08:12:17 test-preload-977743 kubelet[1159]: E1002 08:12:17.913734    1159 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392737913469926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 08:12:17 test-preload-977743 kubelet[1159]: E1002 08:12:17.913795    1159 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392737913469926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 08:12:27 test-preload-977743 kubelet[1159]: E1002 08:12:27.916252    1159 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392747915751812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 08:12:27 test-preload-977743 kubelet[1159]: E1002 08:12:27.916598    1159 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759392747915751812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [096b221981be0110e4342d4083983ebc2ab772eef90889f13d2c70cd02b8829f] <==
	I1002 08:12:13.402606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-977743 -n test-preload-977743
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-977743 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-977743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-977743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-977743: (1.024976841s)
--- FAIL: TestPreload (163.46s)

                                                
                                    

Test pass (276/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.18
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 2.86
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 92.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 219.55
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.6
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 6.32
39 TestAddons/parallel/MetricsServer 6.85
42 TestAddons/parallel/Headlamp 81.26
43 TestAddons/parallel/CloudSpanner 6.63
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
48 TestAddons/StoppedEnableDisable 69.62
49 TestCertOptions 61.72
50 TestCertExpiration 268
52 TestForceSystemdFlag 67.2
53 TestForceSystemdEnv 68.02
55 TestKVMDriverInstallOrUpdate 0.6
59 TestErrorSpam/setup 40.15
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.87
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.95
64 TestErrorSpam/stop 5.01
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 85.84
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.43
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
76 TestFunctional/serial/CacheCmd/cache/add_local 1.17
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 36.14
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.56
87 TestFunctional/serial/LogsFileCmd 1.52
88 TestFunctional/serial/InvalidService 5.11
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.83
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.45
103 TestFunctional/parallel/CpCmd 1.45
105 TestFunctional/parallel/FileSync 0.21
106 TestFunctional/parallel/CertSync 1.24
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
114 TestFunctional/parallel/License 0.27
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
126 TestFunctional/parallel/ProfileCmd/profile_list 0.34
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
128 TestFunctional/parallel/MountCmd/any-port 70.34
129 TestFunctional/parallel/MountCmd/specific-port 1.89
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.24
131 TestFunctional/parallel/Version/short 0.05
132 TestFunctional/parallel/Version/components 0.47
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
137 TestFunctional/parallel/ImageCommands/ImageBuild 5.19
138 TestFunctional/parallel/ImageCommands/Setup 0.41
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
149 TestFunctional/parallel/ServiceCmd/List 1.27
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 219.06
162 TestMultiControlPlane/serial/DeployApp 6.33
163 TestMultiControlPlane/serial/PingHostFromPods 1.27
164 TestMultiControlPlane/serial/AddWorkerNode 43.95
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 13.82
168 TestMultiControlPlane/serial/StopSecondaryNode 83.63
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 37.42
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.19
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 385.49
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.21
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
175 TestMultiControlPlane/serial/StopCluster 252.39
176 TestMultiControlPlane/serial/RestartCluster 110.35
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
178 TestMultiControlPlane/serial/AddSecondaryNode 82.78
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 82.81
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 87.9
215 TestMountStart/serial/StartWithMountFirst 23.43
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 23.53
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.34
222 TestMountStart/serial/RestartStopped 20.14
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 136.32
227 TestMultiNode/serial/DeployApp2Nodes 6.47
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 44.09
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.6
233 TestMultiNode/serial/StopNode 2.56
234 TestMultiNode/serial/StartAfterStop 38.62
235 TestMultiNode/serial/RestartKeepsNodes 285.97
236 TestMultiNode/serial/DeleteNode 2.89
237 TestMultiNode/serial/StopMultiNode 162.76
238 TestMultiNode/serial/RestartMultiNode 95.41
239 TestMultiNode/serial/ValidateNameConflict 46.15
246 TestScheduledStopUnix 113.87
250 TestRunningBinaryUpgrade 161.48
252 TestKubernetesUpgrade 179.36
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 89.52
265 TestPause/serial/Start 100.94
266 TestNoKubernetes/serial/StartWithStopK8s 26.09
267 TestNoKubernetes/serial/Start 50.27
275 TestNetworkPlugins/group/false 3.57
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
277 TestNoKubernetes/serial/ProfileList 6.04
281 TestPause/serial/SecondStartNoReconfiguration 46.06
282 TestStoppedBinaryUpgrade/Setup 0.49
283 TestNoKubernetes/serial/Stop 1.41
284 TestStoppedBinaryUpgrade/Upgrade 135.19
285 TestNoKubernetes/serial/StartNoArgs 52.6
286 TestPause/serial/Pause 0.86
287 TestPause/serial/VerifyStatus 0.31
288 TestPause/serial/Unpause 0.84
289 TestPause/serial/PauseAgain 1.04
290 TestPause/serial/DeletePaused 0.95
291 TestPause/serial/VerifyDeletedResources 7.31
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
294 TestStartStop/group/old-k8s-version/serial/FirstStart 112.4
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
297 TestStartStop/group/embed-certs/serial/FirstStart 107.19
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.63
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.36
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
302 TestStartStop/group/embed-certs/serial/DeployApp 9.31
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
304 TestStartStop/group/old-k8s-version/serial/Stop 85.64
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.08
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
308 TestStartStop/group/embed-certs/serial/Stop 87.64
310 TestStartStop/group/newest-cni/serial/FirstStart 46.23
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/old-k8s-version/serial/SecondStart 57.01
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 71.47
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
316 TestStartStop/group/embed-certs/serial/SecondStart 83.65
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.5
319 TestStartStop/group/newest-cni/serial/Stop 8.49
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
321 TestStartStop/group/newest-cni/serial/SecondStart 58.1
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
325 TestStartStop/group/old-k8s-version/serial/Pause 4.2
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
328 TestStartStop/group/no-preload/serial/FirstStart 106.78
329 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
331 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
332 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.06
333 TestNetworkPlugins/group/auto/Start 89.12
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.19
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
338 TestStartStop/group/newest-cni/serial/Pause 4.25
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
340 TestStartStop/group/embed-certs/serial/Pause 3.32
341 TestNetworkPlugins/group/kindnet/Start 78.21
342 TestNetworkPlugins/group/enable-default-cni/Start 120.47
343 TestStartStop/group/no-preload/serial/DeployApp 10.41
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/auto/KubeletFlags 0.24
346 TestNetworkPlugins/group/auto/NetCatPod 11.33
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
348 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.39
350 TestStartStop/group/no-preload/serial/Stop 89.88
351 TestNetworkPlugins/group/auto/DNS 0.16
352 TestNetworkPlugins/group/auto/Localhost 0.14
353 TestNetworkPlugins/group/auto/HairPin 0.13
354 TestNetworkPlugins/group/kindnet/DNS 0.15
355 TestNetworkPlugins/group/kindnet/Localhost 0.13
356 TestNetworkPlugins/group/kindnet/HairPin 0.13
357 TestNetworkPlugins/group/flannel/Start 69.81
358 TestNetworkPlugins/group/calico/Start 92.41
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.58
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
364 TestNetworkPlugins/group/bridge/Start 93.1
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
366 TestNetworkPlugins/group/flannel/ControllerPod 6.01
367 TestStartStop/group/no-preload/serial/SecondStart 66.06
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
369 TestNetworkPlugins/group/flannel/NetCatPod 12.28
370 TestNetworkPlugins/group/flannel/DNS 0.17
371 TestNetworkPlugins/group/flannel/Localhost 0.13
372 TestNetworkPlugins/group/flannel/HairPin 0.17
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.26
375 TestNetworkPlugins/group/calico/NetCatPod 12.67
376 TestNetworkPlugins/group/custom-flannel/Start 70.6
377 TestNetworkPlugins/group/calico/DNS 0.2
378 TestNetworkPlugins/group/calico/Localhost 0.17
379 TestNetworkPlugins/group/calico/HairPin 0.14
380 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.12
381 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
382 TestNetworkPlugins/group/bridge/NetCatPod 10.28
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
385 TestStartStop/group/no-preload/serial/Pause 3.1
386 TestNetworkPlugins/group/bridge/DNS 0.19
387 TestNetworkPlugins/group/bridge/Localhost 0.13
388 TestNetworkPlugins/group/bridge/HairPin 0.22
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
391 TestNetworkPlugins/group/custom-flannel/DNS 0.15
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (6.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.180468016s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.18s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:57:08.053847  566080 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 06:57:08.053957  566080 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-760196
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-760196: exit status 85 (64.483493ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:01.917036  566092 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:01.917167  566092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:01.917177  566092 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:01.917182  566092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:01.917428  566092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	W1002 06:57:01.917579  566092 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-562157/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-562157/.minikube/config/config.json: no such file or directory
	I1002 06:57:01.918078  566092 out.go:368] Setting JSON to true
	I1002 06:57:01.919038  566092 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49172,"bootTime":1759339050,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:01.919131  566092 start.go:140] virtualization: kvm guest
	I1002 06:57:01.921304  566092 out.go:99] [download-only-760196] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 06:57:01.921462  566092 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:57:01.921488  566092 notify.go:220] Checking for updates...
	I1002 06:57:01.922766  566092 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:57:01.924274  566092 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:01.925628  566092 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:01.926850  566092 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:01.928106  566092 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 06:57:01.930347  566092 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:57:01.930689  566092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:57:01.965444  566092 out.go:99] Using the kvm2 driver based on user configuration
	I1002 06:57:01.965482  566092 start.go:304] selected driver: kvm2
	I1002 06:57:01.965488  566092 start.go:924] validating driver "kvm2" against <nil>
	I1002 06:57:01.965864  566092 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:01.965950  566092 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:01.981071  566092 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:01.981114  566092 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21643-562157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 06:57:01.995243  566092 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 06:57:01.995296  566092 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:57:01.995913  566092 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1002 06:57:01.996106  566092 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:57:01.996132  566092 cni.go:84] Creating CNI manager for ""
	I1002 06:57:01.996209  566092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 06:57:01.996220  566092 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 06:57:01.996283  566092 start.go:348] cluster config:
	{Name:download-only-760196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-760196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:57:01.996468  566092 iso.go:125] acquiring lock: {Name:mkf098c9edb59acf17bed04e42333d4ed092b943 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:57:01.998263  566092 out.go:99] Downloading VM boot image ...
	I1002 06:57:01.998310  566092 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21643-562157/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 06:57:04.993312  566092 out.go:99] Starting "download-only-760196" primary control-plane node in "download-only-760196" cluster
	I1002 06:57:04.993356  566092 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:57:05.012097  566092 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 06:57:05.012127  566092 cache.go:58] Caching tarball of preloaded images
	I1002 06:57:05.012293  566092 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:57:05.013929  566092 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:57:05.013950  566092 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 06:57:05.034842  566092 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1002 06:57:05.034966  566092 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-760196 host does not exist
	  To start a cluster, run: "minikube start -p download-only-760196"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-760196
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (2.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2.858896597s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (2.86s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:57:11.262525  566080 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:57:11.262580  566080 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-562157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-169608
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-169608: exit status 85 (63.401764ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-760196 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ delete  │ -p download-only-760196                                                                                                                                                                             │ download-only-760196 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │ 02 Oct 25 06:57 UTC │
	│ start   │ -o=json --download-only -p download-only-169608 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-169608 │ jenkins │ v1.37.0 │ 02 Oct 25 06:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:57:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:57:08.444638  566277 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:57:08.444886  566277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:08.444894  566277 out.go:374] Setting ErrFile to fd 2...
	I1002 06:57:08.444897  566277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:57:08.445101  566277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 06:57:08.445658  566277 out.go:368] Setting JSON to true
	I1002 06:57:08.446521  566277 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":49178,"bootTime":1759339050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:57:08.446611  566277 start.go:140] virtualization: kvm guest
	I1002 06:57:08.448422  566277 out.go:99] [download-only-169608] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:57:08.448571  566277 notify.go:220] Checking for updates...
	I1002 06:57:08.449906  566277 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:57:08.451330  566277 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:57:08.452635  566277 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 06:57:08.457715  566277 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 06:57:08.458966  566277 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-169608 host does not exist
	  To start a cluster, run: "minikube start -p download-only-169608"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-169608
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1002 06:57:11.865835  566080 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-257523 --alsologtostderr --binary-mirror http://127.0.0.1:33567 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-257523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-257523
--- PASS: TestBinaryMirror (0.65s)

TestOffline (92.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-614627 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-614627 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m31.817678514s)
helpers_test.go:175: Cleaning up "offline-crio-614627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-614627
--- PASS: TestOffline (92.72s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-535714
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-535714: exit status 85 (52.220965ms)

-- stdout --
	* Profile "addons-535714" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-535714"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-535714
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-535714: exit status 85 (53.944088ms)

-- stdout --
	* Profile "addons-535714" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-535714"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (219.55s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-535714 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.548301674s)
--- PASS: TestAddons/Setup (219.55s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-535714 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-535714 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.6s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-535714 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-535714 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1dbf8075-8246-45d0-ae37-79da4f9f9d3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1dbf8075-8246-45d0-ae37-79da4f9f9d3b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005088839s
addons_test.go:694: (dbg) Run:  kubectl --context addons-535714 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-535714 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-535714 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.60s)

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.130841ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-535714
addons_test.go:332: (dbg) Run:  kubectl --context addons-535714 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-2hn79" [06f70d92-86d6-4308-912f-5496d2127813] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005212355s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.32s)

TestAddons/parallel/MetricsServer (6.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.562279ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pj9lt" [7299a5c5-c919-447b-b35c-dd1a63cf17bf] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004057598s
addons_test.go:463: (dbg) Run:  kubectl --context addons-535714 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/Headlamp (81.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-535714 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-535714 --alsologtostderr -v=1: (1.280118382s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-h7d97" [77fe7a55-11e4-4227-a109-40f35a78ecd2] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-h7d97" [77fe7a55-11e4-4227-a109-40f35a78ecd2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-h7d97" [77fe7a55-11e4-4227-a109-40f35a78ecd2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m14.020818195s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-535714 addons disable headlamp --alsologtostderr -v=1: (5.959184824s)
--- PASS: TestAddons/parallel/Headlamp (81.26s)

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-hh72s" [90b98e30-4d59-46a7-a911-3e347c8cffe8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004935076s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/NvidiaDevicePlugin (6.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-pvvr6" [ea55a383-d022-4e59-a613-1708762b6fdb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003707811s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

TestAddons/StoppedEnableDisable (69.62s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-535714
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-535714: (1m9.329963389s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-535714
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-535714
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-535714
--- PASS: TestAddons/StoppedEnableDisable (69.62s)

TestCertOptions (61.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-876990 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-876990 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.958918545s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-876990 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-876990 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-876990 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-876990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-876990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-876990: (2.249735208s)
--- PASS: TestCertOptions (61.72s)

TestCertExpiration (268s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-805578 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-805578 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.50707223s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-805578 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-805578 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.61460418s)
helpers_test.go:175: Cleaning up "cert-expiration-805578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-805578
--- PASS: TestCertExpiration (268.00s)

TestForceSystemdFlag (67.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-022022 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-022022 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.006711175s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-022022 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
E1002 08:19:18.194701  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "force-systemd-flag-022022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-022022
--- PASS: TestForceSystemdFlag (67.20s)

TestForceSystemdEnv (68.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-640302 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-640302 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.107414355s)
helpers_test.go:175: Cleaning up "force-systemd-env-640302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-640302
--- PASS: TestForceSystemdEnv (68.02s)

TestKVMDriverInstallOrUpdate (0.6s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1002 08:18:57.122870  566080 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 08:18:57.123029  566080 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1922610067/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 08:18:57.157833  566080 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1922610067/001/docker-machine-driver-kvm2 version is 1.1.1
W1002 08:18:57.157896  566080 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1002 08:18:57.158091  566080 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 08:18:57.158177  566080 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1922610067/001/docker-machine-driver-kvm2
I1002 08:18:57.552256  566080 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1922610067/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 08:18:57.570089  566080 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1922610067/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.60s)

TestErrorSpam/setup (40.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-664024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-664024 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 07:10:52.841551  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:52.847947  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:52.859318  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:52.880766  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:52.922264  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:53.003784  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:53.165342  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:53.487087  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:54.129094  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:55.411420  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:10:57.975327  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:11:03.097097  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:11:13.339010  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-664024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-664024 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.1513051s)
--- PASS: TestErrorSpam/setup (40.15s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 status
--- PASS: TestErrorSpam/status (0.87s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (5.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop: (2.182051419s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop: (1.22466633s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-664024 --log_dir /tmp/nospam-664024 stop: (1.606075783s)
--- PASS: TestErrorSpam/stop (5.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-562157/.minikube/files/etc/test/nested/copy/566080/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 07:11:33.821055  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:12:14.783534  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-365308 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.837716456s)
--- PASS: TestFunctional/serial/StartWithProxy (85.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.43s)

=== RUN   TestFunctional/serial/SoftStart
I1002 07:12:51.247310  566080 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-365308 --alsologtostderr -v=8: (35.426814423s)
functional_test.go:678: soft start took 35.427692305s for "functional-365308" cluster.
I1002 07:13:26.674560  566080 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (35.43s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-365308 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:3.1: (1.068995924s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:3.3: (1.086641561s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 cache add registry.k8s.io/pause:latest: (1.095307274s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-365308 /tmp/TestFunctionalserialCacheCmdcacheadd_local3332899173/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache add minikube-local-cache-test:functional-365308
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache delete minikube-local-cache-test:functional-365308
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-365308
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.575146ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 cache reload: (1.002225455s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 kubectl -- --context functional-365308 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-365308 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 07:13:36.707334  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-365308 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.14406947s)
functional_test.go:776: restart took 36.144207318s for "functional-365308" cluster.
I1002 07:14:09.826563  566080 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.14s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-365308 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs: (1.563945566s)
--- PASS: TestFunctional/serial/LogsCmd (1.56s)

TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 logs --file /tmp/TestFunctionalserialLogsFileCmd4021163791/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 logs --file /tmp/TestFunctionalserialLogsFileCmd4021163791/001/logs.txt: (1.518610628s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (5.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-365308 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-365308
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-365308: exit status 115 (716.108878ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.84:31049 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-365308 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-365308 delete -f testdata/invalidsvc.yaml: (1.197612791s)
--- PASS: TestFunctional/serial/InvalidService (5.11s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 config get cpus: exit status 14 (59.868403ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 config get cpus: exit status 14 (54.009126ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (140.567526ms)

-- stdout --
	* [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 07:15:35.046492  581064 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:35.046768  581064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.046778  581064 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:35.046785  581064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:35.047019  581064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:35.047510  581064 out.go:368] Setting JSON to false
	I1002 07:15:35.048544  581064 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:35.048647  581064 start.go:140] virtualization: kvm guest
	I1002 07:15:35.050724  581064 out.go:179] * [functional-365308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:35.052315  581064 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:35.052300  581064 notify.go:220] Checking for updates...
	I1002 07:15:35.054875  581064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:35.056331  581064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:35.058668  581064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:35.060046  581064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:35.061438  581064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:35.063064  581064 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:35.063558  581064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.063612  581064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.078743  581064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1002 07:15:35.079241  581064 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.079819  581064 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.079841  581064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.080302  581064 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.080524  581064 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.080811  581064 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:35.081266  581064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:35.081343  581064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:35.095231  581064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I1002 07:15:35.095632  581064 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:35.096093  581064 main.go:141] libmachine: Using API Version  1
	I1002 07:15:35.096115  581064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:35.096495  581064 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:35.096724  581064 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:35.128904  581064 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 07:15:35.130325  581064 start.go:304] selected driver: kvm2
	I1002 07:15:35.130345  581064 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:35.130463  581064 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:35.132761  581064 out.go:203] 
	W1002 07:15:35.133881  581064 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 07:15:35.135022  581064 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-365308 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (142.898771ms)

-- stdout --
	* [functional-365308] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 07:15:34.904379  581036 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:15:34.904494  581036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:34.904503  581036 out.go:374] Setting ErrFile to fd 2...
	I1002 07:15:34.904507  581036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:15:34.904919  581036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:15:34.905472  581036 out.go:368] Setting JSON to false
	I1002 07:15:34.906622  581036 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":50285,"bootTime":1759339050,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 07:15:34.906739  581036 start.go:140] virtualization: kvm guest
	I1002 07:15:34.909664  581036 out.go:179] * [functional-365308] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 07:15:34.911331  581036 notify.go:220] Checking for updates...
	I1002 07:15:34.911393  581036 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:15:34.912777  581036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:15:34.914354  581036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 07:15:34.915993  581036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 07:15:34.917584  581036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 07:15:34.919255  581036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:15:34.921356  581036 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:15:34.921997  581036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:34.922101  581036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:34.937227  581036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I1002 07:15:34.937770  581036 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:34.938311  581036 main.go:141] libmachine: Using API Version  1
	I1002 07:15:34.938338  581036 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:34.938775  581036 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:34.939010  581036 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:34.939311  581036 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:15:34.939769  581036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:15:34.939822  581036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:15:34.954077  581036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I1002 07:15:34.954629  581036 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:15:34.955098  581036 main.go:141] libmachine: Using API Version  1
	I1002 07:15:34.955124  581036 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:15:34.955505  581036 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:15:34.955781  581036 main.go:141] libmachine: (functional-365308) Calling .DriverName
	I1002 07:15:34.989019  581036 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 07:15:34.990095  581036 start.go:304] selected driver: kvm2
	I1002 07:15:34.990114  581036 start.go:924] validating driver "kvm2" against &{Name:functional-365308 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-365308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:15:34.990229  581036 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:15:34.992221  581036 out.go:203] 
	W1002 07:15:34.993439  581036 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 07:15:34.994642  581036 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh -n functional-365308 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cp functional-365308:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1454325528/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh -n functional-365308 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh -n functional-365308 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/566080/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /etc/test/nested/copy/566080/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/566080.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /etc/ssl/certs/566080.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/566080.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /usr/share/ca-certificates/566080.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5660802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /etc/ssl/certs/5660802.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5660802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /usr/share/ca-certificates/5660802.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-365308 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "sudo systemctl is-active docker": exit status 1 (202.373287ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "sudo systemctl is-active containerd": exit status 1 (209.927315ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "285.251423ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.609105ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "284.512884ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.42523ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/MountCmd/any-port (70.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdany-port302975222/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759389260561385241" to /tmp/TestFunctionalparallelMountCmdany-port302975222/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759389260561385241" to /tmp/TestFunctionalparallelMountCmdany-port302975222/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759389260561385241" to /tmp/TestFunctionalparallelMountCmdany-port302975222/001/test-1759389260561385241
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.429345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 07:14:20.776196  566080 retry.go:31] will retry after 389.247972ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 test-1759389260561385241
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh cat /mount-9p/test-1759389260561385241
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-365308 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6a35b485-9767-4ffe-854c-046f59e75070] Pending
helpers_test.go:352: "busybox-mount" [6a35b485-9767-4ffe-854c-046f59e75070] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6a35b485-9767-4ffe-854c-046f59e75070] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6a35b485-9767-4ffe-854c-046f59e75070] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m8.003983118s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-365308 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdany-port302975222/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (70.34s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdspecific-port1103249159/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.227648ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 07:15:31.105550  566080 retry.go:31] will retry after 663.09359ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdspecific-port1103249159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "sudo umount -f /mount-9p": exit status 1 (202.406773ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-365308 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdspecific-port1103249159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T" /mount1: exit status 1 (220.924805ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 07:15:33.009439  566080 retry.go:31] will retry after 358.466045ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-365308 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365308 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2750502758/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
TestFunctional/parallel/Version/components (0.47s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365308 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-365308
localhost/kicbase/echo-server:functional-365308
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365308 image ls --format short --alsologtostderr:
I1002 07:20:38.463457  583497 out.go:360] Setting OutFile to fd 1 ...
I1002 07:20:38.463747  583497 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:38.463760  583497 out.go:374] Setting ErrFile to fd 2...
I1002 07:20:38.463764  583497 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:38.463950  583497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:20:38.464564  583497 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:38.464681  583497 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:38.465058  583497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:38.465125  583497 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:38.480585  583497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
I1002 07:20:38.481084  583497 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:38.481672  583497 main.go:141] libmachine: Using API Version  1
I1002 07:20:38.481698  583497 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:38.482045  583497 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:38.482271  583497 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:20:38.484250  583497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:38.484314  583497 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:38.498540  583497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
I1002 07:20:38.499036  583497 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:38.499527  583497 main.go:141] libmachine: Using API Version  1
I1002 07:20:38.499547  583497 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:38.499944  583497 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:38.500181  583497 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:20:38.500414  583497 ssh_runner.go:195] Run: systemctl --version
I1002 07:20:38.500441  583497 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:20:38.503359  583497 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:38.503884  583497 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:20:38.503913  583497 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:38.504102  583497 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:20:38.504305  583497 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:20:38.504474  583497 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:20:38.504648  583497 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:20:38.591429  583497 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 07:20:38.632634  583497 main.go:141] libmachine: Making call to close driver server
I1002 07:20:38.632648  583497 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:38.632967  583497 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:38.632989  583497 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:38.632997  583497 main.go:141] libmachine: Making call to close driver server
I1002 07:20:38.633004  583497 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:38.633063  583497 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:20:38.633278  583497 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:38.633296  583497 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
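Since the short format prints one image reference per line, presence checks like the one this test performs can be scripted directly against the output. A minimal sketch (the listing below is an excerpt of the stdout above; `expected` is an illustrative set, not minikube's actual expected-image list):

```python
# Verify that required image references appear in `image ls --format short` output.
# `short_output` mirrors a few lines of the listing above; in practice it would be
# the captured stdout of `minikube image ls --format short`.
short_output = """\
registry.k8s.io/pause:3.10.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
"""

# Illustrative expectations (a subset of the images the cluster should carry).
expected = {
    "registry.k8s.io/kube-apiserver:v1.34.1",
    "registry.k8s.io/etcd:3.6.4-0",
}

# Build the set of listed references, skipping blank lines.
listed = {line.strip() for line in short_output.splitlines() if line.strip()}
missing = expected - listed
print(sorted(missing))  # an empty list means every expected image is present
```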
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365308 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-365308  │ 69e9aabc2f656 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-365308  │ 9121dd7d33b3f │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/kicbase/echo-server           │ functional-365308  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365308 image ls --format table --alsologtostderr:
I1002 07:20:44.334108  583662 out.go:360] Setting OutFile to fd 1 ...
I1002 07:20:44.334394  583662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:44.334400  583662 out.go:374] Setting ErrFile to fd 2...
I1002 07:20:44.334405  583662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:44.334720  583662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:20:44.335601  583662 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:44.335728  583662 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:44.336274  583662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:44.336350  583662 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:44.351257  583662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
I1002 07:20:44.351784  583662 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:44.352409  583662 main.go:141] libmachine: Using API Version  1
I1002 07:20:44.352445  583662 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:44.352888  583662 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:44.353103  583662 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:20:44.355325  583662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:44.355368  583662 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:44.370853  583662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
I1002 07:20:44.371444  583662 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:44.371992  583662 main.go:141] libmachine: Using API Version  1
I1002 07:20:44.372035  583662 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:44.372399  583662 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:44.372608  583662 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:20:44.372808  583662 ssh_runner.go:195] Run: systemctl --version
I1002 07:20:44.372838  583662 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:20:44.375958  583662 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:44.376374  583662 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:20:44.376415  583662 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:44.376547  583662 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:20:44.376721  583662 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:20:44.376924  583662 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:20:44.377131  583662 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:20:44.464413  583662 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 07:20:44.518821  583662 main.go:141] libmachine: Making call to close driver server
I1002 07:20:44.518845  583662 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:44.519211  583662 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:44.519230  583662 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:44.519244  583662 main.go:141] libmachine: Making call to close driver server
I1002 07:20:44.519253  583662 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:44.519501  583662 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:44.519526  583662 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:44.519551  583662 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365308 image ls --format json --alsologtostderr:
[{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"69e9aabc2f656ed3834ab1f72fa72a215422bfdf82c005ade120a290014803d4","repoDigests":["localhost/minikube-local-cache-test@sha256:e34c659fd48a6ef3f3b3ea56e3952409cd3e0471e4204d22b0fa2f7ac0852f15"],"repoTags":["localhost/minikube-local-cache-test:functional-365308"],"size":"3330"},{"id":"9121dd7d33b3f1cbb8416ed6450c64e5898948f65ed8dd3642f21aec7d9efcea","repoDigests":["localhost/my-image@sha256:44ee91b844612e55f861fe5ce4a5852e7c3f35de85b9a91801bda4b137c354be"],"repoTags":["localhost/my-image:functional-365308"],"size":"1468600"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-365308"],"size":"4943877"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b96a44e9e81bf7de48ca94557731df119a24611971d9c2d0208eb9ef5604b5e2","repoDigests":["docker.io/library/a0de5dd43e348e3404a13cc42c34414a40295d6e84db8e346297e5db50d413f4-tmp@sha256:00cc6563a82bd0036d79712c941bddd3167b28b9cd23e788d2f29497a364d5ac"],"repoTags":[],"size":"1466018"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365308 image ls --format json --alsologtostderr:
I1002 07:20:44.114047  583639 out.go:360] Setting OutFile to fd 1 ...
I1002 07:20:44.114327  583639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:44.114337  583639 out.go:374] Setting ErrFile to fd 2...
I1002 07:20:44.114341  583639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:44.114573  583639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:20:44.115166  583639 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:44.115266  583639 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:44.115620  583639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:44.115687  583639 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:44.129958  583639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
I1002 07:20:44.130587  583639 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:44.131245  583639 main.go:141] libmachine: Using API Version  1
I1002 07:20:44.131271  583639 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:44.131646  583639 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:44.131841  583639 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:20:44.134122  583639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:44.134184  583639 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:44.147995  583639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
I1002 07:20:44.148530  583639 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:44.149087  583639 main.go:141] libmachine: Using API Version  1
I1002 07:20:44.149111  583639 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:44.149518  583639 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:44.149725  583639 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:20:44.149939  583639 ssh_runner.go:195] Run: systemctl --version
I1002 07:20:44.149969  583639 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:20:44.152922  583639 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:44.153407  583639 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:20:44.153437  583639 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:44.153607  583639 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:20:44.153817  583639 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:20:44.154001  583639 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:20:44.154158  583639 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:20:44.239330  583639 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 07:20:44.278339  583639 main.go:141] libmachine: Making call to close driver server
I1002 07:20:44.278358  583639 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:44.278687  583639 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:44.278708  583639 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:44.278720  583639 main.go:141] libmachine: Making call to close driver server
I1002 07:20:44.278730  583639 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:44.278728  583639 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:20:44.278958  583639 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:44.278971  583639 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
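The JSON format emits a single-line array of objects with `id`, `repoDigests`, `repoTags`, and `size` (bytes, encoded as a decimal string). A minimal parsing sketch (the two entries below are excerpted from the output above, with ids shortened to the 13-character form the table output uses):

```python
import json

# Two-entry excerpt of the `image ls --format json` output above; in practice
# `raw` would be the full stdout of `minikube image ls --format json`.
raw = ('[{"id":"5f1f5298c888d","repoTags":["registry.k8s.io/etcd:3.6.4-0"],'
       '"size":"195976448"},'
       '{"id":"cd073f4c5f6a8","repoTags":["registry.k8s.io/pause:3.10.1"],'
       '"size":"742092"}]')

images = json.loads(raw)
# Map each image's first repo tag to its size in megabytes; note that the size
# field is a string, so it must be converted explicitly before arithmetic.
sizes_mb = {img["repoTags"][0]: int(img["size"]) / 1e6 for img in images}
for tag, mb in sorted(sizes_mb.items()):
    print(f"{tag}: {mb:.1f} MB")
```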
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365308 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-365308
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 69e9aabc2f656ed3834ab1f72fa72a215422bfdf82c005ade120a290014803d4
repoDigests:
- localhost/minikube-local-cache-test@sha256:e34c659fd48a6ef3f3b3ea56e3952409cd3e0471e4204d22b0fa2f7ac0852f15
repoTags:
- localhost/minikube-local-cache-test:functional-365308
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365308 image ls --format yaml --alsologtostderr:
I1002 07:20:38.686798  583521 out.go:360] Setting OutFile to fd 1 ...
I1002 07:20:38.687066  583521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:38.687075  583521 out.go:374] Setting ErrFile to fd 2...
I1002 07:20:38.687079  583521 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:38.687270  583521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:20:38.687867  583521 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:38.687985  583521 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:38.688368  583521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:38.688437  583521 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:38.702513  583521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40409
I1002 07:20:38.703122  583521 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:38.703781  583521 main.go:141] libmachine: Using API Version  1
I1002 07:20:38.703799  583521 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:38.704202  583521 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:38.704490  583521 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:20:38.706751  583521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:38.706802  583521 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:38.720873  583521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
I1002 07:20:38.721422  583521 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:38.721931  583521 main.go:141] libmachine: Using API Version  1
I1002 07:20:38.721951  583521 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:38.722353  583521 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:38.722527  583521 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:20:38.722748  583521 ssh_runner.go:195] Run: systemctl --version
I1002 07:20:38.722783  583521 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:20:38.725749  583521 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:38.726235  583521 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:20:38.726265  583521 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:38.726400  583521 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:20:38.726544  583521 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:20:38.726692  583521 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:20:38.726833  583521 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:20:38.816756  583521 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 07:20:38.865863  583521 main.go:141] libmachine: Making call to close driver server
I1002 07:20:38.865893  583521 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:38.866232  583521 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:38.866258  583521 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:38.866268  583521 main.go:141] libmachine: Making call to close driver server
I1002 07:20:38.866276  583521 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:38.866282  583521 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:20:38.866538  583521 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:38.866556  583521 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365308 ssh pgrep buildkitd: exit status 1 (201.955642ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr: (4.762435836s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b96a44e9e81
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-365308
--> 9121dd7d33b
Successfully tagged localhost/my-image:functional-365308
9121dd7d33b3f1cbb8416ed6450c64e5898948f65ed8dd3642f21aec7d9efcea
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365308 image build -t localhost/my-image:functional-365308 testdata/build --alsologtostderr:
I1002 07:20:39.122485  583575 out.go:360] Setting OutFile to fd 1 ...
I1002 07:20:39.122773  583575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:39.122784  583575 out.go:374] Setting ErrFile to fd 2...
I1002 07:20:39.122798  583575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:20:39.123007  583575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
I1002 07:20:39.123652  583575 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:39.125078  583575 config.go:182] Loaded profile config "functional-365308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:20:39.125471  583575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:39.125527  583575 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:39.139675  583575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
I1002 07:20:39.140205  583575 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:39.140788  583575 main.go:141] libmachine: Using API Version  1
I1002 07:20:39.140815  583575 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:39.141201  583575 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:39.141410  583575 main.go:141] libmachine: (functional-365308) Calling .GetState
I1002 07:20:39.143377  583575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 07:20:39.143415  583575 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 07:20:39.157276  583575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
I1002 07:20:39.157740  583575 main.go:141] libmachine: () Calling .GetVersion
I1002 07:20:39.158309  583575 main.go:141] libmachine: Using API Version  1
I1002 07:20:39.158339  583575 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 07:20:39.158712  583575 main.go:141] libmachine: () Calling .GetMachineName
I1002 07:20:39.158941  583575 main.go:141] libmachine: (functional-365308) Calling .DriverName
I1002 07:20:39.159196  583575 ssh_runner.go:195] Run: systemctl --version
I1002 07:20:39.159228  583575 main.go:141] libmachine: (functional-365308) Calling .GetSSHHostname
I1002 07:20:39.162232  583575 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:39.162668  583575 main.go:141] libmachine: (functional-365308) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f1:3d", ip: ""} in network mk-functional-365308: {Iface:virbr1 ExpiryTime:2025-10-02 08:11:41 +0000 UTC Type:0 Mac:52:54:00:64:f1:3d Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:functional-365308 Clientid:01:52:54:00:64:f1:3d}
I1002 07:20:39.162701  583575 main.go:141] libmachine: (functional-365308) DBG | domain functional-365308 has defined IP address 192.168.39.84 and MAC address 52:54:00:64:f1:3d in network mk-functional-365308
I1002 07:20:39.162862  583575 main.go:141] libmachine: (functional-365308) Calling .GetSSHPort
I1002 07:20:39.163040  583575 main.go:141] libmachine: (functional-365308) Calling .GetSSHKeyPath
I1002 07:20:39.163195  583575 main.go:141] libmachine: (functional-365308) Calling .GetSSHUsername
I1002 07:20:39.163404  583575 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/functional-365308/id_rsa Username:docker}
I1002 07:20:39.251496  583575 build_images.go:161] Building image from path: /tmp/build.231047458.tar
I1002 07:20:39.251567  583575 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 07:20:39.264387  583575 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.231047458.tar
I1002 07:20:39.269357  583575 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.231047458.tar: stat -c "%s %y" /var/lib/minikube/build/build.231047458.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.231047458.tar': No such file or directory
I1002 07:20:39.269398  583575 ssh_runner.go:362] scp /tmp/build.231047458.tar --> /var/lib/minikube/build/build.231047458.tar (3072 bytes)
I1002 07:20:39.301680  583575 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.231047458
I1002 07:20:39.317163  583575 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.231047458 -xf /var/lib/minikube/build/build.231047458.tar
I1002 07:20:39.331813  583575 crio.go:315] Building image: /var/lib/minikube/build/build.231047458
I1002 07:20:39.331899  583575 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-365308 /var/lib/minikube/build/build.231047458 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 07:20:43.800427  583575 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-365308 /var/lib/minikube/build/build.231047458 --cgroup-manager=cgroupfs: (4.46848011s)
I1002 07:20:43.800547  583575 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.231047458
I1002 07:20:43.817122  583575 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.231047458.tar
I1002 07:20:43.829893  583575 build_images.go:217] Built localhost/my-image:functional-365308 from /tmp/build.231047458.tar
I1002 07:20:43.829947  583575 build_images.go:133] succeeded building to: functional-365308
I1002 07:20:43.829954  583575 build_images.go:134] failed building to: 
I1002 07:20:43.829989  583575 main.go:141] libmachine: Making call to close driver server
I1002 07:20:43.830012  583575 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:43.830376  583575 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:43.830396  583575 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 07:20:43.830404  583575 main.go:141] libmachine: Making call to close driver server
I1002 07:20:43.830411  583575 main.go:141] libmachine: (functional-365308) Calling .Close
I1002 07:20:43.830416  583575 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:20:43.830668  583575 main.go:141] libmachine: Successfully made call to close driver server
I1002 07:20:43.830682  583575 main.go:141] libmachine: (functional-365308) DBG | Closing plugin on server side
I1002 07:20:43.830686  583575 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.19s)

TestFunctional/parallel/ImageCommands/Setup (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365308
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr: (1.133344568s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-365308
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image load --daemon kicbase/echo-server:functional-365308 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image save kicbase/echo-server:functional-365308 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image rm kicbase/echo-server:functional-365308 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-365308
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 image save --daemon kicbase/echo-server:functional-365308 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-365308
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 update-context --alsologtostderr -v=2
E1002 07:20:52.832778  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ServiceCmd/List (1.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 service list: (1.267590306s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.27s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-365308 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-365308 service list -o json: (1.267861114s)
functional_test.go:1504: Took "1.26794551s" to run "out/minikube-linux-amd64 -p functional-365308 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-365308
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-365308
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-365308
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (219.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 07:30:52.832768  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m38.297111112s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
E1002 07:34:18.194600  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.201090  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.212526  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.234725  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.276561  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.358005  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:18.519874  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (219.06s)

TestMultiControlPlane/serial/DeployApp (6.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E1002 07:34:18.841647  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- rollout status deployment/busybox
E1002 07:34:19.483399  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:20.764865  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 kubectl -- rollout status deployment/busybox: (4.150090579s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-v9gjj -- nslookup kubernetes.io
E1002 07:34:23.326517  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-wb5zk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-zfcfg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-v9gjj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-wb5zk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-zfcfg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-v9gjj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-wb5zk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-zfcfg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.33s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-v9gjj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-v9gjj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-wb5zk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-wb5zk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-zfcfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 kubectl -- exec busybox-7b57f96db7-zfcfg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
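The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP by taking line 5, field 3 of busybox-style `nslookup` output. A runnable sketch against canned output (the addresses here are illustrative):

```shell
# Simulated busybox `nslookup host.minikube.internal` output. Line 5 is the
# resolved answer ("Address 1: <ip> <name>"), so field 3 of that line is the IP.
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n' \
  | awk 'NR==5' | cut -d' ' -f3
# prints: 192.168.39.1
```

Note the extraction is positional, so it only works for this busybox output layout; GNU `nslookup` formats its answer differently.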

TestMultiControlPlane/serial/AddWorkerNode (43.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node add --alsologtostderr -v 5
E1002 07:34:28.448895  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:38.691210  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:59.173453  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 node add --alsologtostderr -v 5: (43.071521988s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.95s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-003617 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (13.82s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp testdata/cp-test.txt ha-003617:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4186020223/001/cp-test_ha-003617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617:/home/docker/cp-test.txt ha-003617-m02:/home/docker/cp-test_ha-003617_ha-003617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test_ha-003617_ha-003617-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617:/home/docker/cp-test.txt ha-003617-m03:/home/docker/cp-test_ha-003617_ha-003617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test_ha-003617_ha-003617-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617:/home/docker/cp-test.txt ha-003617-m04:/home/docker/cp-test_ha-003617_ha-003617-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test_ha-003617_ha-003617-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp testdata/cp-test.txt ha-003617-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4186020223/001/cp-test_ha-003617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m02:/home/docker/cp-test.txt ha-003617:/home/docker/cp-test_ha-003617-m02_ha-003617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test_ha-003617-m02_ha-003617.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m02:/home/docker/cp-test.txt ha-003617-m03:/home/docker/cp-test_ha-003617-m02_ha-003617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test_ha-003617-m02_ha-003617-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m02:/home/docker/cp-test.txt ha-003617-m04:/home/docker/cp-test_ha-003617-m02_ha-003617-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test_ha-003617-m02_ha-003617-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp testdata/cp-test.txt ha-003617-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4186020223/001/cp-test_ha-003617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m03:/home/docker/cp-test.txt ha-003617:/home/docker/cp-test_ha-003617-m03_ha-003617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test_ha-003617-m03_ha-003617.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m03:/home/docker/cp-test.txt ha-003617-m02:/home/docker/cp-test_ha-003617-m03_ha-003617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test_ha-003617-m03_ha-003617-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m03:/home/docker/cp-test.txt ha-003617-m04:/home/docker/cp-test_ha-003617-m03_ha-003617-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test_ha-003617-m03_ha-003617-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp testdata/cp-test.txt ha-003617-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4186020223/001/cp-test_ha-003617-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m04:/home/docker/cp-test.txt ha-003617:/home/docker/cp-test_ha-003617-m04_ha-003617.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617 "sudo cat /home/docker/cp-test_ha-003617-m04_ha-003617.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m04:/home/docker/cp-test.txt ha-003617-m02:/home/docker/cp-test_ha-003617-m04_ha-003617-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m02 "sudo cat /home/docker/cp-test_ha-003617-m04_ha-003617-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 cp ha-003617-m04:/home/docker/cp-test.txt ha-003617-m03:/home/docker/cp-test_ha-003617-m04_ha-003617-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 ssh -n ha-003617-m03 "sudo cat /home/docker/cp-test_ha-003617-m04_ha-003617-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.82s)
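The CopyFile steps above exercise three copy directions (host to node, node to host, node to node via the host), each verified by catting the destination file over SSH. A local stand-in for that round-trip, using plain `cp` in a temp directory in place of `minikube cp` and `ssh ... sudo cat` (directory names are hypothetical):

```shell
# Host -> node, then node -> node copy, then verify the contents survived,
# mirroring the cp/ssh-cat pattern in the test.
workdir="$(mktemp -d)"
mkdir -p "$workdir/host" "$workdir/m02" "$workdir/m03"
echo 'hello from cp-test' > "$workdir/host/cp-test.txt"

cp "$workdir/host/cp-test.txt" "$workdir/m02/cp-test.txt"          # host -> node
cp "$workdir/m02/cp-test.txt"  "$workdir/m03/cp-test_m02_m03.txt"  # node -> node

# verification step: the copied file must match the original
diff "$workdir/host/cp-test.txt" "$workdir/m03/cp-test_m02_m03.txt" && echo match
rm -rf "$workdir"
# prints: match
```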

TestMultiControlPlane/serial/StopSecondaryNode (83.63s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node stop m02 --alsologtostderr -v 5
E1002 07:35:40.135035  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:35:52.835432  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 node stop m02 --alsologtostderr -v 5: (1m22.921884263s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5: exit status 7 (703.223344ms)

-- stdout --
	ha-003617
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003617-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003617-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003617-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1002 07:36:48.079232  590958 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:36:48.079453  590958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:36:48.079465  590958 out.go:374] Setting ErrFile to fd 2...
	I1002 07:36:48.079469  590958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:36:48.079741  590958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:36:48.080010  590958 out.go:368] Setting JSON to false
	I1002 07:36:48.080049  590958 mustload.go:65] Loading cluster: ha-003617
	I1002 07:36:48.080100  590958 notify.go:220] Checking for updates...
	I1002 07:36:48.080505  590958 config.go:182] Loaded profile config "ha-003617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:36:48.080525  590958 status.go:174] checking status of ha-003617 ...
	I1002 07:36:48.081010  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.081057  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.101129  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
	I1002 07:36:48.101690  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.102364  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.102399  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.102878  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.103177  590958 main.go:141] libmachine: (ha-003617) Calling .GetState
	I1002 07:36:48.105348  590958 status.go:371] ha-003617 host status = "Running" (err=<nil>)
	I1002 07:36:48.105373  590958 host.go:66] Checking if "ha-003617" exists ...
	I1002 07:36:48.105681  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.105724  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.119941  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1002 07:36:48.120542  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.121125  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.121167  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.121514  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.121714  590958 main.go:141] libmachine: (ha-003617) Calling .GetIP
	I1002 07:36:48.125216  590958 main.go:141] libmachine: (ha-003617) DBG | domain ha-003617 has defined MAC address 52:54:00:01:64:15 in network mk-ha-003617
	I1002 07:36:48.125764  590958 main.go:141] libmachine: (ha-003617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:64:15", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:30:56 +0000 UTC Type:0 Mac:52:54:00:01:64:15 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-003617 Clientid:01:52:54:00:01:64:15}
	I1002 07:36:48.125796  590958 main.go:141] libmachine: (ha-003617) DBG | domain ha-003617 has defined IP address 192.168.39.182 and MAC address 52:54:00:01:64:15 in network mk-ha-003617
	I1002 07:36:48.125983  590958 host.go:66] Checking if "ha-003617" exists ...
	I1002 07:36:48.126404  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.126461  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.141548  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I1002 07:36:48.142032  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.142525  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.142556  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.142900  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.143194  590958 main.go:141] libmachine: (ha-003617) Calling .DriverName
	I1002 07:36:48.143450  590958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:36:48.143477  590958 main.go:141] libmachine: (ha-003617) Calling .GetSSHHostname
	I1002 07:36:48.146964  590958 main.go:141] libmachine: (ha-003617) DBG | domain ha-003617 has defined MAC address 52:54:00:01:64:15 in network mk-ha-003617
	I1002 07:36:48.147653  590958 main.go:141] libmachine: (ha-003617) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:64:15", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:30:56 +0000 UTC Type:0 Mac:52:54:00:01:64:15 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:ha-003617 Clientid:01:52:54:00:01:64:15}
	I1002 07:36:48.147678  590958 main.go:141] libmachine: (ha-003617) DBG | domain ha-003617 has defined IP address 192.168.39.182 and MAC address 52:54:00:01:64:15 in network mk-ha-003617
	I1002 07:36:48.147904  590958 main.go:141] libmachine: (ha-003617) Calling .GetSSHPort
	I1002 07:36:48.148075  590958 main.go:141] libmachine: (ha-003617) Calling .GetSSHKeyPath
	I1002 07:36:48.148262  590958 main.go:141] libmachine: (ha-003617) Calling .GetSSHUsername
	I1002 07:36:48.148459  590958 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/ha-003617/id_rsa Username:docker}
	I1002 07:36:48.233050  590958 ssh_runner.go:195] Run: systemctl --version
	I1002 07:36:48.242093  590958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:36:48.263799  590958 kubeconfig.go:125] found "ha-003617" server: "https://192.168.39.254:8443"
	I1002 07:36:48.263842  590958 api_server.go:166] Checking apiserver status ...
	I1002 07:36:48.263889  590958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:36:48.288618  590958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1002 07:36:48.302605  590958 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:36:48.302663  590958 ssh_runner.go:195] Run: ls
	I1002 07:36:48.308550  590958 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 07:36:48.314438  590958 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 07:36:48.314467  590958 status.go:463] ha-003617 apiserver status = Running (err=<nil>)
	I1002 07:36:48.314478  590958 status.go:176] ha-003617 status: &{Name:ha-003617 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:36:48.314497  590958 status.go:174] checking status of ha-003617-m02 ...
	I1002 07:36:48.314794  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.314844  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.329650  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1002 07:36:48.330282  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.330762  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.330781  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.331119  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.331335  590958 main.go:141] libmachine: (ha-003617-m02) Calling .GetState
	I1002 07:36:48.333064  590958 status.go:371] ha-003617-m02 host status = "Stopped" (err=<nil>)
	I1002 07:36:48.333083  590958 status.go:384] host is not running, skipping remaining checks
	I1002 07:36:48.333089  590958 status.go:176] ha-003617-m02 status: &{Name:ha-003617-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:36:48.333111  590958 status.go:174] checking status of ha-003617-m03 ...
	I1002 07:36:48.333441  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.333483  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.347989  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I1002 07:36:48.348468  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.349044  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.349070  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.349506  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.349733  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetState
	I1002 07:36:48.351873  590958 status.go:371] ha-003617-m03 host status = "Running" (err=<nil>)
	I1002 07:36:48.351901  590958 host.go:66] Checking if "ha-003617-m03" exists ...
	I1002 07:36:48.352290  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.352364  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.366420  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46301
	I1002 07:36:48.366829  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.367305  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.367324  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.367709  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.367920  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetIP
	I1002 07:36:48.371386  590958 main.go:141] libmachine: (ha-003617-m03) DBG | domain ha-003617-m03 has defined MAC address 52:54:00:ea:df:38 in network mk-ha-003617
	I1002 07:36:48.371933  590958 main.go:141] libmachine: (ha-003617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:df:38", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:33:07 +0000 UTC Type:0 Mac:52:54:00:ea:df:38 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-003617-m03 Clientid:01:52:54:00:ea:df:38}
	I1002 07:36:48.371964  590958 main.go:141] libmachine: (ha-003617-m03) DBG | domain ha-003617-m03 has defined IP address 192.168.39.43 and MAC address 52:54:00:ea:df:38 in network mk-ha-003617
	I1002 07:36:48.372124  590958 host.go:66] Checking if "ha-003617-m03" exists ...
	I1002 07:36:48.372591  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.372644  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.386743  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I1002 07:36:48.387234  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.387748  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.387771  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.388174  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.388386  590958 main.go:141] libmachine: (ha-003617-m03) Calling .DriverName
	I1002 07:36:48.388589  590958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:36:48.388612  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetSSHHostname
	I1002 07:36:48.391533  590958 main.go:141] libmachine: (ha-003617-m03) DBG | domain ha-003617-m03 has defined MAC address 52:54:00:ea:df:38 in network mk-ha-003617
	I1002 07:36:48.392048  590958 main.go:141] libmachine: (ha-003617-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:df:38", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:33:07 +0000 UTC Type:0 Mac:52:54:00:ea:df:38 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:ha-003617-m03 Clientid:01:52:54:00:ea:df:38}
	I1002 07:36:48.392076  590958 main.go:141] libmachine: (ha-003617-m03) DBG | domain ha-003617-m03 has defined IP address 192.168.39.43 and MAC address 52:54:00:ea:df:38 in network mk-ha-003617
	I1002 07:36:48.392290  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetSSHPort
	I1002 07:36:48.392461  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetSSHKeyPath
	I1002 07:36:48.392592  590958 main.go:141] libmachine: (ha-003617-m03) Calling .GetSSHUsername
	I1002 07:36:48.392680  590958 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/ha-003617-m03/id_rsa Username:docker}
	I1002 07:36:48.480997  590958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:36:48.506193  590958 kubeconfig.go:125] found "ha-003617" server: "https://192.168.39.254:8443"
	I1002 07:36:48.506232  590958 api_server.go:166] Checking apiserver status ...
	I1002 07:36:48.506315  590958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:36:48.530946  590958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1785/cgroup
	W1002 07:36:48.543403  590958 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1785/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:36:48.543468  590958 ssh_runner.go:195] Run: ls
	I1002 07:36:48.548718  590958 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 07:36:48.553836  590958 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 07:36:48.553859  590958 status.go:463] ha-003617-m03 apiserver status = Running (err=<nil>)
	I1002 07:36:48.553877  590958 status.go:176] ha-003617-m03 status: &{Name:ha-003617-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:36:48.553895  590958 status.go:174] checking status of ha-003617-m04 ...
	I1002 07:36:48.554262  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.554307  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.568401  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I1002 07:36:48.568939  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.569511  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.569540  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.569892  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.570130  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetState
	I1002 07:36:48.571817  590958 status.go:371] ha-003617-m04 host status = "Running" (err=<nil>)
	I1002 07:36:48.571836  590958 host.go:66] Checking if "ha-003617-m04" exists ...
	I1002 07:36:48.572276  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.572319  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.586109  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I1002 07:36:48.586550  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.587005  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.587089  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.587476  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.587692  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetIP
	I1002 07:36:48.590757  590958 main.go:141] libmachine: (ha-003617-m04) DBG | domain ha-003617-m04 has defined MAC address 52:54:00:ff:95:0e in network mk-ha-003617
	I1002 07:36:48.591362  590958 main.go:141] libmachine: (ha-003617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:95:0e", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:34:43 +0000 UTC Type:0 Mac:52:54:00:ff:95:0e Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-003617-m04 Clientid:01:52:54:00:ff:95:0e}
	I1002 07:36:48.591392  590958 main.go:141] libmachine: (ha-003617-m04) DBG | domain ha-003617-m04 has defined IP address 192.168.39.140 and MAC address 52:54:00:ff:95:0e in network mk-ha-003617
	I1002 07:36:48.591548  590958 host.go:66] Checking if "ha-003617-m04" exists ...
	I1002 07:36:48.591920  590958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:36:48.591964  590958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:36:48.607455  590958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I1002 07:36:48.607969  590958 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:36:48.608543  590958 main.go:141] libmachine: Using API Version  1
	I1002 07:36:48.608572  590958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:36:48.608969  590958 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:36:48.609220  590958 main.go:141] libmachine: (ha-003617-m04) Calling .DriverName
	I1002 07:36:48.609466  590958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:36:48.609488  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetSSHHostname
	I1002 07:36:48.613181  590958 main.go:141] libmachine: (ha-003617-m04) DBG | domain ha-003617-m04 has defined MAC address 52:54:00:ff:95:0e in network mk-ha-003617
	I1002 07:36:48.613759  590958 main.go:141] libmachine: (ha-003617-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:95:0e", ip: ""} in network mk-ha-003617: {Iface:virbr1 ExpiryTime:2025-10-02 08:34:43 +0000 UTC Type:0 Mac:52:54:00:ff:95:0e Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-003617-m04 Clientid:01:52:54:00:ff:95:0e}
	I1002 07:36:48.613785  590958 main.go:141] libmachine: (ha-003617-m04) DBG | domain ha-003617-m04 has defined IP address 192.168.39.140 and MAC address 52:54:00:ff:95:0e in network mk-ha-003617
	I1002 07:36:48.614165  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetSSHPort
	I1002 07:36:48.614438  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetSSHKeyPath
	I1002 07:36:48.614630  590958 main.go:141] libmachine: (ha-003617-m04) Calling .GetSSHUsername
	I1002 07:36:48.614794  590958 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/ha-003617-m04/id_rsa Username:docker}
	I1002 07:36:48.705524  590958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:36:48.729374  590958 status.go:176] ha-003617-m04 status: &{Name:ha-003617-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (83.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node start m02 --alsologtostderr -v 5
E1002 07:37:02.059404  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 node start m02 --alsologtostderr -v 5: (36.146497546s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5: (1.179198757s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.188140743s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 stop --alsologtostderr -v 5
E1002 07:39:18.198213  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:39:45.903229  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:40:52.832373  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 stop --alsologtostderr -v 5: (4m18.287901394s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 start --wait true --alsologtostderr -v 5: (2m7.078436313s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.49s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node delete m03 --alsologtostderr -v 5
E1002 07:43:55.913363  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 node delete m03 --alsologtostderr -v 5: (18.413806314s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (252.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 stop --alsologtostderr -v 5
E1002 07:44:18.194590  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:45:52.833397  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 stop --alsologtostderr -v 5: (4m12.271924997s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5: exit status 7 (114.854206ms)

                                                
                                                
-- stdout --
	ha-003617
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003617-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003617-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:48:25.759223  594926 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:48:25.759649  594926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:48:25.759657  594926 out.go:374] Setting ErrFile to fd 2...
	I1002 07:48:25.759662  594926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:48:25.759891  594926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:48:25.761176  594926 out.go:368] Setting JSON to false
	I1002 07:48:25.761219  594926 mustload.go:65] Loading cluster: ha-003617
	I1002 07:48:25.761335  594926 notify.go:220] Checking for updates...
	I1002 07:48:25.761697  594926 config.go:182] Loaded profile config "ha-003617": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:48:25.761716  594926 status.go:174] checking status of ha-003617 ...
	I1002 07:48:25.762211  594926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:48:25.762296  594926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:48:25.783517  594926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I1002 07:48:25.784105  594926 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:48:25.784692  594926 main.go:141] libmachine: Using API Version  1
	I1002 07:48:25.784714  594926 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:48:25.785228  594926 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:48:25.785492  594926 main.go:141] libmachine: (ha-003617) Calling .GetState
	I1002 07:48:25.787487  594926 status.go:371] ha-003617 host status = "Stopped" (err=<nil>)
	I1002 07:48:25.787509  594926 status.go:384] host is not running, skipping remaining checks
	I1002 07:48:25.787516  594926 status.go:176] ha-003617 status: &{Name:ha-003617 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:48:25.787558  594926 status.go:174] checking status of ha-003617-m02 ...
	I1002 07:48:25.787904  594926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:48:25.787964  594926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:48:25.801596  594926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43363
	I1002 07:48:25.802009  594926 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:48:25.802465  594926 main.go:141] libmachine: Using API Version  1
	I1002 07:48:25.802491  594926 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:48:25.802838  594926 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:48:25.803073  594926 main.go:141] libmachine: (ha-003617-m02) Calling .GetState
	I1002 07:48:25.805227  594926 status.go:371] ha-003617-m02 host status = "Stopped" (err=<nil>)
	I1002 07:48:25.805242  594926 status.go:384] host is not running, skipping remaining checks
	I1002 07:48:25.805247  594926 status.go:176] ha-003617-m02 status: &{Name:ha-003617-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:48:25.805268  594926 status.go:174] checking status of ha-003617-m04 ...
	I1002 07:48:25.805550  594926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:48:25.805592  594926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:48:25.819659  594926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44779
	I1002 07:48:25.820277  594926 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:48:25.820760  594926 main.go:141] libmachine: Using API Version  1
	I1002 07:48:25.820782  594926 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:48:25.821132  594926 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:48:25.821326  594926 main.go:141] libmachine: (ha-003617-m04) Calling .GetState
	I1002 07:48:25.823492  594926 status.go:371] ha-003617-m04 host status = "Stopped" (err=<nil>)
	I1002 07:48:25.823507  594926 status.go:384] host is not running, skipping remaining checks
	I1002 07:48:25.823515  594926 status.go:176] ha-003617-m04 status: &{Name:ha-003617-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (252.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (110.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 07:49:18.197092  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m49.554806336s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 node add --control-plane --alsologtostderr -v 5
E1002 07:50:41.265450  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:50:52.836341  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-003617 node add --control-plane --alsologtostderr -v 5: (1m21.850988162s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-003617 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (82.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-635164 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-635164 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.80721792s)
--- PASS: TestJSONOutput/start/Command (82.81s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-635164 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-635164 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-635164 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-635164 --output=json --user=testUser: (7.000155321s)
--- PASS: TestJSONOutput/stop/Command (7.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-234942 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-234942 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (70.269938ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"21d01d9b-d61d-4047-8e91-53944303dfb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-234942] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c4e7286-4a1b-464c-8624-1149495207d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"44f8b1a8-36f7-48e7-9af3-c7b2338b0d2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47d20886-7874-4894-af1d-6e5ca62190c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig"}}
	{"specversion":"1.0","id":"f659469a-b4c8-4e9d-9f4b-40b1ce0f654e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube"}}
	{"specversion":"1.0","id":"79f859d5-90cb-4bcd-97da-a474c5d52c0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"79614815-e301-4295-b1b8-6fdcc0cf70e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"699054ef-821e-48d2-8aa3-6621983717dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-234942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-234942
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (87.9s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-423174 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-423174 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.364064707s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-436617 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 07:54:18.198584  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-436617 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.632203552s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-423174
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-436617
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-436617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-436617
helpers_test.go:175: Cleaning up "first-423174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-423174
--- PASS: TestMinikubeProfile (87.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-393319 --memory=3072 --mount-string /tmp/TestMountStartserial3068602769/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-393319 --memory=3072 --mount-string /tmp/TestMountStartserial3068602769/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.434534218s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.43s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-393319 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-393319 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (23.53s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-412029 --memory=3072 --mount-string /tmp/TestMountStartserial3068602769/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-412029 --memory=3072 --mount-string /tmp/TestMountStartserial3068602769/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.532863692s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.53s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-393319 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-412029
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-412029: (1.340194293s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-412029
E1002 07:55:52.835854  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-412029: (19.1361392s)
--- PASS: TestMountStart/serial/RestartStopped (20.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-412029 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465973 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465973 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m15.875642412s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.32s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.47s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-465973 -- rollout status deployment/busybox: (4.971355873s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-8rbnz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-rjjw5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-8rbnz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-rjjw5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-8rbnz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-rjjw5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.47s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-8rbnz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-8rbnz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-rjjw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465973 -- exec busybox-7b57f96db7-rjjw5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (44.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-465973 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-465973 -v=5 --alsologtostderr: (43.507855308s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.09s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-465973 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp testdata/cp-test.txt multinode-465973:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1663067384/001/cp-test_multinode-465973.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973:/home/docker/cp-test.txt multinode-465973-m02:/home/docker/cp-test_multinode-465973_multinode-465973-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test_multinode-465973_multinode-465973-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973:/home/docker/cp-test.txt multinode-465973-m03:/home/docker/cp-test_multinode-465973_multinode-465973-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test_multinode-465973_multinode-465973-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp testdata/cp-test.txt multinode-465973-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1663067384/001/cp-test_multinode-465973-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m02:/home/docker/cp-test.txt multinode-465973:/home/docker/cp-test_multinode-465973-m02_multinode-465973.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test_multinode-465973-m02_multinode-465973.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m02:/home/docker/cp-test.txt multinode-465973-m03:/home/docker/cp-test_multinode-465973-m02_multinode-465973-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test_multinode-465973-m02_multinode-465973-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp testdata/cp-test.txt multinode-465973-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1663067384/001/cp-test_multinode-465973-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m03:/home/docker/cp-test.txt multinode-465973:/home/docker/cp-test_multinode-465973-m03_multinode-465973.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973 "sudo cat /home/docker/cp-test_multinode-465973-m03_multinode-465973.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 cp multinode-465973-m03:/home/docker/cp-test.txt multinode-465973-m02:/home/docker/cp-test_multinode-465973-m03_multinode-465973-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 ssh -n multinode-465973-m02 "sudo cat /home/docker/cp-test_multinode-465973-m03_multinode-465973-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.60s)

                                                
                                    
TestMultiNode/serial/StopNode (2.56s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-465973 node stop m03: (1.629513168s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465973 status: exit status 7 (462.00498ms)
-- stdout --
	multinode-465973
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-465973-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-465973-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr: exit status 7 (469.318248ms)
-- stdout --
	multinode-465973
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-465973-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-465973-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:59:13.630344  602840 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:59:13.630655  602840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:59:13.630669  602840 out.go:374] Setting ErrFile to fd 2...
	I1002 07:59:13.630675  602840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:59:13.630922  602840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 07:59:13.631128  602840 out.go:368] Setting JSON to false
	I1002 07:59:13.631180  602840 mustload.go:65] Loading cluster: multinode-465973
	I1002 07:59:13.631217  602840 notify.go:220] Checking for updates...
	I1002 07:59:13.631604  602840 config.go:182] Loaded profile config "multinode-465973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:59:13.631624  602840 status.go:174] checking status of multinode-465973 ...
	I1002 07:59:13.632236  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.632306  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.651194  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43473
	I1002 07:59:13.651765  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.652339  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.652366  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.652846  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.653076  602840 main.go:141] libmachine: (multinode-465973) Calling .GetState
	I1002 07:59:13.655202  602840 status.go:371] multinode-465973 host status = "Running" (err=<nil>)
	I1002 07:59:13.655221  602840 host.go:66] Checking if "multinode-465973" exists ...
	I1002 07:59:13.655524  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.655575  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.669956  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I1002 07:59:13.670430  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.670902  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.670925  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.671288  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.671492  602840 main.go:141] libmachine: (multinode-465973) Calling .GetIP
	I1002 07:59:13.674452  602840 main.go:141] libmachine: (multinode-465973) DBG | domain multinode-465973 has defined MAC address 52:54:00:54:bf:ab in network mk-multinode-465973
	I1002 07:59:13.674925  602840 main.go:141] libmachine: (multinode-465973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:ab", ip: ""} in network mk-multinode-465973: {Iface:virbr1 ExpiryTime:2025-10-02 08:56:11 +0000 UTC Type:0 Mac:52:54:00:54:bf:ab Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-465973 Clientid:01:52:54:00:54:bf:ab}
	I1002 07:59:13.674976  602840 main.go:141] libmachine: (multinode-465973) DBG | domain multinode-465973 has defined IP address 192.168.39.24 and MAC address 52:54:00:54:bf:ab in network mk-multinode-465973
	I1002 07:59:13.675089  602840 host.go:66] Checking if "multinode-465973" exists ...
	I1002 07:59:13.675510  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.675568  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.690762  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36931
	I1002 07:59:13.691251  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.691783  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.691810  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.692220  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.692429  602840 main.go:141] libmachine: (multinode-465973) Calling .DriverName
	I1002 07:59:13.692622  602840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:59:13.692646  602840 main.go:141] libmachine: (multinode-465973) Calling .GetSSHHostname
	I1002 07:59:13.695889  602840 main.go:141] libmachine: (multinode-465973) DBG | domain multinode-465973 has defined MAC address 52:54:00:54:bf:ab in network mk-multinode-465973
	I1002 07:59:13.696346  602840 main.go:141] libmachine: (multinode-465973) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:ab", ip: ""} in network mk-multinode-465973: {Iface:virbr1 ExpiryTime:2025-10-02 08:56:11 +0000 UTC Type:0 Mac:52:54:00:54:bf:ab Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-465973 Clientid:01:52:54:00:54:bf:ab}
	I1002 07:59:13.696376  602840 main.go:141] libmachine: (multinode-465973) DBG | domain multinode-465973 has defined IP address 192.168.39.24 and MAC address 52:54:00:54:bf:ab in network mk-multinode-465973
	I1002 07:59:13.696537  602840 main.go:141] libmachine: (multinode-465973) Calling .GetSSHPort
	I1002 07:59:13.696715  602840 main.go:141] libmachine: (multinode-465973) Calling .GetSSHKeyPath
	I1002 07:59:13.696866  602840 main.go:141] libmachine: (multinode-465973) Calling .GetSSHUsername
	I1002 07:59:13.697028  602840 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/multinode-465973/id_rsa Username:docker}
	I1002 07:59:13.780150  602840 ssh_runner.go:195] Run: systemctl --version
	I1002 07:59:13.787299  602840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:59:13.810390  602840 kubeconfig.go:125] found "multinode-465973" server: "https://192.168.39.24:8443"
	I1002 07:59:13.810436  602840 api_server.go:166] Checking apiserver status ...
	I1002 07:59:13.810474  602840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:59:13.833981  602840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	W1002 07:59:13.848944  602840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:59:13.849017  602840 ssh_runner.go:195] Run: ls
	I1002 07:59:13.855065  602840 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1002 07:59:13.860525  602840 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1002 07:59:13.860563  602840 status.go:463] multinode-465973 apiserver status = Running (err=<nil>)
	I1002 07:59:13.860578  602840 status.go:176] multinode-465973 status: &{Name:multinode-465973 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:59:13.860601  602840 status.go:174] checking status of multinode-465973-m02 ...
	I1002 07:59:13.860942  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.860992  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.876763  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1002 07:59:13.877276  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.877761  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.877786  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.878181  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.878435  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetState
	I1002 07:59:13.880440  602840 status.go:371] multinode-465973-m02 host status = "Running" (err=<nil>)
	I1002 07:59:13.880461  602840 host.go:66] Checking if "multinode-465973-m02" exists ...
	I1002 07:59:13.880760  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.880800  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.896577  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
	I1002 07:59:13.897054  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.897562  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.897591  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.897928  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.898205  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetIP
	I1002 07:59:13.901714  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | domain multinode-465973-m02 has defined MAC address 52:54:00:82:e6:61 in network mk-multinode-465973
	I1002 07:59:13.902156  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e6:61", ip: ""} in network mk-multinode-465973: {Iface:virbr1 ExpiryTime:2025-10-02 08:57:41 +0000 UTC Type:0 Mac:52:54:00:82:e6:61 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-465973-m02 Clientid:01:52:54:00:82:e6:61}
	I1002 07:59:13.902186  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | domain multinode-465973-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:82:e6:61 in network mk-multinode-465973
	I1002 07:59:13.902391  602840 host.go:66] Checking if "multinode-465973-m02" exists ...
	I1002 07:59:13.902805  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:13.902855  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:13.917683  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1002 07:59:13.918269  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:13.918804  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:13.918820  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:13.919132  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:13.919377  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .DriverName
	I1002 07:59:13.919589  602840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:59:13.919621  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetSSHHostname
	I1002 07:59:13.922923  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | domain multinode-465973-m02 has defined MAC address 52:54:00:82:e6:61 in network mk-multinode-465973
	I1002 07:59:13.923492  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e6:61", ip: ""} in network mk-multinode-465973: {Iface:virbr1 ExpiryTime:2025-10-02 08:57:41 +0000 UTC Type:0 Mac:52:54:00:82:e6:61 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-465973-m02 Clientid:01:52:54:00:82:e6:61}
	I1002 07:59:13.923521  602840 main.go:141] libmachine: (multinode-465973-m02) DBG | domain multinode-465973-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:82:e6:61 in network mk-multinode-465973
	I1002 07:59:13.923717  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetSSHPort
	I1002 07:59:13.923879  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetSSHKeyPath
	I1002 07:59:13.924040  602840 main.go:141] libmachine: (multinode-465973-m02) Calling .GetSSHUsername
	I1002 07:59:13.924207  602840 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21643-562157/.minikube/machines/multinode-465973-m02/id_rsa Username:docker}
	I1002 07:59:14.010898  602840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:59:14.029822  602840 status.go:176] multinode-465973-m02 status: &{Name:multinode-465973-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:59:14.029858  602840 status.go:174] checking status of multinode-465973-m03 ...
	I1002 07:59:14.030195  602840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 07:59:14.030245  602840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 07:59:14.044773  602840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I1002 07:59:14.045237  602840 main.go:141] libmachine: () Calling .GetVersion
	I1002 07:59:14.045801  602840 main.go:141] libmachine: Using API Version  1
	I1002 07:59:14.045823  602840 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 07:59:14.046202  602840 main.go:141] libmachine: () Calling .GetMachineName
	I1002 07:59:14.046435  602840 main.go:141] libmachine: (multinode-465973-m03) Calling .GetState
	I1002 07:59:14.048271  602840 status.go:371] multinode-465973-m03 host status = "Stopped" (err=<nil>)
	I1002 07:59:14.048288  602840 status.go:384] host is not running, skipping remaining checks
	I1002 07:59:14.048295  602840 status.go:176] multinode-465973-m03 status: &{Name:multinode-465973-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.56s)

TestMultiNode/serial/StartAfterStop (38.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 node start m03 -v=5 --alsologtostderr
E1002 07:59:18.194311  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-465973 node start m03 -v=5 --alsologtostderr: (37.959436976s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.62s)

TestMultiNode/serial/RestartKeepsNodes (285.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465973
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-465973
E1002 08:00:35.917813  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:00:52.836715  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-465973: (2m38.479767264s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465973 --wait=true -v=5 --alsologtostderr
E1002 08:04:18.194522  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465973 --wait=true -v=5 --alsologtostderr: (2m7.389287749s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465973
--- PASS: TestMultiNode/serial/RestartKeepsNodes (285.97s)

TestMultiNode/serial/DeleteNode (2.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-465973 node delete m03: (2.28717922s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.89s)

TestMultiNode/serial/StopMultiNode (162.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 stop
E1002 08:05:52.832830  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:21.269426  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-465973 stop: (2m42.574780551s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465973 status: exit status 7 (95.618053ms)

-- stdout --
	multinode-465973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-465973-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr: exit status 7 (85.679928ms)

-- stdout --
	multinode-465973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-465973-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1002 08:07:24.245478  605575 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:07:24.245786  605575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:07:24.245798  605575 out.go:374] Setting ErrFile to fd 2...
	I1002 08:07:24.245802  605575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:07:24.246020  605575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 08:07:24.246232  605575 out.go:368] Setting JSON to false
	I1002 08:07:24.246265  605575 mustload.go:65] Loading cluster: multinode-465973
	I1002 08:07:24.246418  605575 notify.go:220] Checking for updates...
	I1002 08:07:24.246648  605575 config.go:182] Loaded profile config "multinode-465973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:07:24.246662  605575 status.go:174] checking status of multinode-465973 ...
	I1002 08:07:24.247102  605575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:07:24.247153  605575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:07:24.261038  605575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I1002 08:07:24.261550  605575 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:07:24.262163  605575 main.go:141] libmachine: Using API Version  1
	I1002 08:07:24.262191  605575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:07:24.262588  605575 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:07:24.262813  605575 main.go:141] libmachine: (multinode-465973) Calling .GetState
	I1002 08:07:24.264527  605575 status.go:371] multinode-465973 host status = "Stopped" (err=<nil>)
	I1002 08:07:24.264541  605575 status.go:384] host is not running, skipping remaining checks
	I1002 08:07:24.264546  605575 status.go:176] multinode-465973 status: &{Name:multinode-465973 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 08:07:24.264564  605575 status.go:174] checking status of multinode-465973-m02 ...
	I1002 08:07:24.264928  605575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 08:07:24.264971  605575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 08:07:24.279387  605575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I1002 08:07:24.279825  605575 main.go:141] libmachine: () Calling .GetVersion
	I1002 08:07:24.280287  605575 main.go:141] libmachine: Using API Version  1
	I1002 08:07:24.280313  605575 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 08:07:24.280635  605575 main.go:141] libmachine: () Calling .GetMachineName
	I1002 08:07:24.280866  605575 main.go:141] libmachine: (multinode-465973-m02) Calling .GetState
	I1002 08:07:24.282797  605575 status.go:371] multinode-465973-m02 host status = "Stopped" (err=<nil>)
	I1002 08:07:24.282811  605575 status.go:384] host is not running, skipping remaining checks
	I1002 08:07:24.282828  605575 status.go:176] multinode-465973-m02 status: &{Name:multinode-465973-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (162.76s)

TestMultiNode/serial/RestartMultiNode (95.41s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465973 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465973 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.775480511s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465973 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.41s)

TestMultiNode/serial/ValidateNameConflict (46.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465973
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465973-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-465973-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.964023ms)

-- stdout --
	* [multinode-465973-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	! Profile name 'multinode-465973-m02' is duplicated with machine name 'multinode-465973-m02' in profile 'multinode-465973'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465973-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:09:18.197225  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465973-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.925482252s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-465973
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-465973: exit status 80 (243.120305ms)

-- stdout --
	* Adding node m03 to cluster multinode-465973 as [worker]
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-465973-m03 already exists in multinode-465973-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-465973-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.15s)

TestScheduledStopUnix (113.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-223928 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-223928 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.05059454s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-223928 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-223928 -n scheduled-stop-223928
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-223928 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 08:13:13.464717  566080 retry.go:31] will retry after 117.885µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.465908  566080 retry.go:31] will retry after 103.036µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.467057  566080 retry.go:31] will retry after 240.372µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.468203  566080 retry.go:31] will retry after 439.937µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.469363  566080 retry.go:31] will retry after 363.337µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.470511  566080 retry.go:31] will retry after 896.03µs: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.471646  566080 retry.go:31] will retry after 1.260284ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.473843  566080 retry.go:31] will retry after 1.96512ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.476068  566080 retry.go:31] will retry after 2.972489ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.479267  566080 retry.go:31] will retry after 2.239992ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.482484  566080 retry.go:31] will retry after 5.591119ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.488742  566080 retry.go:31] will retry after 5.429603ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.495021  566080 retry.go:31] will retry after 11.939265ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.507306  566080 retry.go:31] will retry after 20.824375ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.528617  566080 retry.go:31] will retry after 21.071709ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
I1002 08:13:13.549872  566080 retry.go:31] will retry after 56.594926ms: open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/scheduled-stop-223928/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-223928 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-223928 -n scheduled-stop-223928
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-223928
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-223928 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 08:14:18.199436  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-223928
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-223928: exit status 7 (70.24714ms)

-- stdout --
	scheduled-stop-223928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-223928 -n scheduled-stop-223928
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-223928 -n scheduled-stop-223928: exit status 7 (68.271745ms)

-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-223928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-223928
--- PASS: TestScheduledStopUnix (113.87s)

TestRunningBinaryUpgrade (161.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1777381785 start -p running-upgrade-636120 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1777381785 start -p running-upgrade-636120 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.030655947s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-636120 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-636120 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.803357003s)
helpers_test.go:175: Cleaning up "running-upgrade-636120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-636120
--- PASS: TestRunningBinaryUpgrade (161.48s)

TestKubernetesUpgrade (179.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.017985335s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-814962
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-814962: (2.198671099s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-814962 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-814962 status --format={{.Host}}: exit status 7 (88.745984ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.008461384s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-814962 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (101.718174ms)

-- stdout --
	* [kubernetes-upgrade-814962] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-814962
	    minikube start -p kubernetes-upgrade-814962 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8149622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-814962 --kubernetes-version=v1.34.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814962 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.628587168s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-814962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-814962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-814962: (2.23600998s)
--- PASS: TestKubernetesUpgrade (179.36s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (80.425804ms)

-- stdout --
	* [NoKubernetes-629707] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (89.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629707 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629707 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.191516908s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-629707 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.52s)

TestPause/serial/Start (100.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-227088 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:15:52.832535  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-227088 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m40.941074825s)
--- PASS: TestPause/serial/Start (100.94s)

TestNoKubernetes/serial/StartWithStopK8s (26.09s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (24.883987751s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-629707 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-629707 status -o json: exit status 2 (271.297799ms)

-- stdout --
	{"Name":"NoKubernetes-629707","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-629707
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.09s)

TestNoKubernetes/serial/Start (50.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629707 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.273057761s)
--- PASS: TestNoKubernetes/serial/Start (50.27s)

TestNetworkPlugins/group/false (3.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-693693 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-693693 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (129.066258ms)

-- stdout --
	* [false-693693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1002 08:17:09.707155  612471 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:17:09.707434  612471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:17:09.707444  612471 out.go:374] Setting ErrFile to fd 2...
	I1002 08:17:09.707449  612471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:17:09.707671  612471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-562157/.minikube/bin
	I1002 08:17:09.708178  612471 out.go:368] Setting JSON to false
	I1002 08:17:09.709291  612471 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":53980,"bootTime":1759339050,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 08:17:09.709387  612471 start.go:140] virtualization: kvm guest
	I1002 08:17:09.711632  612471 out.go:179] * [false-693693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 08:17:09.713344  612471 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:17:09.713348  612471 notify.go:220] Checking for updates...
	I1002 08:17:09.716040  612471 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:17:09.717331  612471 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-562157/kubeconfig
	I1002 08:17:09.721752  612471 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-562157/.minikube
	I1002 08:17:09.723286  612471 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 08:17:09.724735  612471 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:17:09.726781  612471 config.go:182] Loaded profile config "NoKubernetes-629707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1002 08:17:09.726964  612471 config.go:182] Loaded profile config "kubernetes-upgrade-814962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:17:09.727196  612471 config.go:182] Loaded profile config "pause-227088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:17:09.727341  612471 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:17:09.769754  612471 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 08:17:09.771174  612471 start.go:304] selected driver: kvm2
	I1002 08:17:09.771200  612471 start.go:924] validating driver "kvm2" against <nil>
	I1002 08:17:09.771215  612471 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:17:09.773706  612471 out.go:203] 
	W1002 08:17:09.775725  612471 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 08:17:09.777212  612471 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-693693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-693693

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-693693

>>> host: /etc/nsswitch.conf:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/hosts:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/resolv.conf:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-693693

>>> host: crictl pods:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: crictl containers:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> k8s: describe netcat deployment:
error: context "false-693693" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-693693" does not exist

>>> k8s: netcat logs:
error: context "false-693693" does not exist

>>> k8s: describe coredns deployment:
error: context "false-693693" does not exist

>>> k8s: describe coredns pods:
error: context "false-693693" does not exist

>>> k8s: coredns logs:
error: context "false-693693" does not exist

>>> k8s: describe api server pod(s):
error: context "false-693693" does not exist

>>> k8s: api server logs:
error: context "false-693693" does not exist

>>> host: /etc/cni:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: ip a s:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: ip r s:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: iptables-save:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: iptables table nat:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> k8s: describe kube-proxy daemon set:
error: context "false-693693" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-693693" does not exist

>>> k8s: kube-proxy logs:
error: context "false-693693" does not exist

>>> host: kubelet daemon status:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: kubelet daemon config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> k8s: kubelet logs:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.235:8443
  name: pause-227088
contexts:
- context:
    cluster: pause-227088
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-227088
  name: pause-227088
current-context: ""
kind: Config
users:
- name: pause-227088
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.crt
    client-key: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-693693

>>> host: docker daemon status:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: docker daemon config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/docker/daemon.json:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: docker system info:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: cri-docker daemon status:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: cri-docker daemon config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: cri-dockerd version:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: containerd daemon status:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: containerd daemon config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/containerd/config.toml:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: containerd config dump:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: crio daemon status:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: crio daemon config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: /etc/crio:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

>>> host: crio config:
* Profile "false-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-693693"

----------------------- debugLogs end: false-693693 [took: 3.27259329s] --------------------------------
helpers_test.go:175: Cleaning up "false-693693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-693693
--- PASS: TestNetworkPlugins/group/false (3.57s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-629707 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-629707 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.489109ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (6.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (2.385603911s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.65086687s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.04s)

TestPause/serial/SecondStartNoReconfiguration (46.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-227088 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:17:15.919454  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-227088 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.032714254s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.06s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestNoKubernetes/serial/Stop (1.41s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-629707
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-629707: (1.408885238s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

TestStoppedBinaryUpgrade/Upgrade (135.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.65731154 start -p stopped-upgrade-318800 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.65731154 start -p stopped-upgrade-318800 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.974245958s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.65731154 -p stopped-upgrade-318800 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.65731154 -p stopped-upgrade-318800 stop: (1.689130194s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-318800 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-318800 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.529166248s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.19s)

TestNoKubernetes/serial/StartNoArgs (52.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-629707 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-629707 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.603267521s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (52.60s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-227088 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-227088 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-227088 --output=json --layout=cluster: exit status 2 (311.406464ms)

-- stdout --
	{"Name":"pause-227088","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-227088","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-227088 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-227088 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-227088 --alsologtostderr -v=5: (1.040576792s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (0.95s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-227088 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.95s)

TestPause/serial/VerifyDeletedResources (7.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (7.31122769s)
--- PASS: TestPause/serial/VerifyDeletedResources (7.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-629707 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-629707 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.489711ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (112.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-208197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-208197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m52.397509409s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (112.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-318800
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

TestStartStop/group/embed-certs/serial/FirstStart (107.19s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-524839 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-524839 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m47.191378766s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.19s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-048759 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 08:20:52.832547  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-048759 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m15.628079164s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.63s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-208197 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b74ff184-d7c0-4380-a208-10ec1985c63e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b74ff184-d7c0-4380-a208-10ec1985c63e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004018053s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-208197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-048759 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [429cffb5-6732-4008-9474-90f596c624c0] Pending
helpers_test.go:352: "busybox" [429cffb5-6732-4008-9474-90f596c624c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [429cffb5-6732-4008-9474-90f596c624c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004945822s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-048759 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-524839 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d91e6f82-24e0-403d-b4f7-28f778b94fe7] Pending
helpers_test.go:352: "busybox" [d91e6f82-24e0-403d-b4f7-28f778b94fe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d91e6f82-24e0-403d-b4f7-28f778b94fe7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005064609s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-524839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-208197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-208197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162189911s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-208197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/old-k8s-version/serial/Stop (85.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-208197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-208197 --alsologtostderr -v=3: (1m25.638329137s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-048759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-048759 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (88.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-048759 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-048759 --alsologtostderr -v=3: (1m28.076047526s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-524839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-524839 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (87.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-524839 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-524839 --alsologtostderr -v=3: (1m27.644522568s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.64s)

TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-698193 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-698193 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (46.234450503s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-208197 -n old-k8s-version-208197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-208197 -n old-k8s-version-208197: exit status 7 (77.975911ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-208197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-208197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-208197 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (56.598885938s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-208197 -n old-k8s-version-208197
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759: exit status 7 (77.749272ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-048759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (71.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-048759 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-048759 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m11.020809088s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (71.47s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-524839 -n embed-certs-524839
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-524839 -n embed-certs-524839: exit status 7 (79.357567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-524839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (83.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-524839 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-524839 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m23.278988527s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-524839 -n embed-certs-524839
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (83.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.5s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-698193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-698193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.504552006s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.50s)

TestStartStop/group/newest-cni/serial/Stop (8.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-698193 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-698193 --alsologtostderr -v=3: (8.490036534s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-698193 -n newest-cni-698193
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-698193 -n newest-cni-698193: exit status 7 (90.767217ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-698193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (58.1s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-698193 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-698193 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.6864214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-698193 -n newest-cni-698193
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (58.10s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5b6px" [12b555df-60ae-43e7-842f-3c6d29a7382d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5b6px" [12b555df-60ae-43e7-842f-3c6d29a7382d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00598488s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5b6px" [12b555df-60ae-43e7-842f-3c6d29a7382d] Running
E1002 08:24:01.271578  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00515408s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-208197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-208197 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-208197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-208197 --alsologtostderr -v=1: (1.448704724s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-208197 -n old-k8s-version-208197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-208197 -n old-k8s-version-208197: exit status 2 (334.094838ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-208197 -n old-k8s-version-208197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-208197 -n old-k8s-version-208197: exit status 2 (359.367571ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-208197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-208197 --alsologtostderr -v=1: (1.179951846s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-208197 -n old-k8s-version-208197
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-208197 -n old-k8s-version-208197
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7cdh4" [a766f18e-c4cf-4179-935b-94660823a0a3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7cdh4" [a766f18e-c4cf-4179-935b-94660823a0a3] Running
E1002 08:24:18.194923  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/functional-365308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005616768s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (106.78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-256460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-256460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m46.777822326s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7cdh4" [a766f18e-c4cf-4179-935b-94660823a0a3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005468913s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-048759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-488rh" [9547c8f7-30d3-448f-848a-e873106bbf3a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-488rh" [9547c8f7-30d3-448f-848a-e873106bbf3a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.009555714s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-048759 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-048759 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-048759 --alsologtostderr -v=1: (1.246438498s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759: exit status 2 (356.233344ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759: exit status 2 (341.393393ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-048759 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-048759 --alsologtostderr -v=1: (1.183624194s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-048759 -n default-k8s-diff-port-048759
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (89.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.116613664s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-488rh" [9547c8f7-30d3-448f-848a-e873106bbf3a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004534304s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-524839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-698193 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-698193 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-698193 --alsologtostderr -v=1: (1.174808554s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-698193 -n newest-cni-698193
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-698193 -n newest-cni-698193: exit status 2 (382.598153ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-698193 -n newest-cni-698193
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-698193 -n newest-cni-698193: exit status 2 (394.260453ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-698193 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-698193 --alsologtostderr -v=1: (1.349966655s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-698193 -n newest-cni-698193
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-698193 -n newest-cni-698193
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-524839 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-524839 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-524839 --alsologtostderr -v=1: (1.120167192s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-524839 -n embed-certs-524839
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-524839 -n embed-certs-524839: exit status 2 (305.50714ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-524839 -n embed-certs-524839
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-524839 -n embed-certs-524839: exit status 2 (300.157333ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-524839 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-524839 -n embed-certs-524839
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-524839 -n embed-certs-524839
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.207379064s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (120.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:25:52.833066  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/addons-535714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m0.469409495s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-256460 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d685ec28-e276-4a90-9cdf-0b50a5d42d6c] Pending
helpers_test.go:352: "busybox" [d685ec28-e276-4a90-9cdf-0b50a5d42d6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d685ec28-e276-4a90-9cdf-0b50a5d42d6c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005239509s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-256460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pmdzs" [4a1cf600-8d2b-4b41-8eab-10ddf66ae544] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005043127s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-693693 "pgrep -a kubelet"
I1002 08:25:59.449694  566080 config.go:182] Loaded profile config "auto-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-693693 replace --force -f testdata/netcat-deployment.yaml
I1002 08:25:59.762107  566080 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7stbz" [68158e8b-677a-49a0-b913-9ddf0dc11868] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7stbz" [68158e8b-677a-49a0-b913-9ddf0dc11868] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.007462592s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-693693 "pgrep -a kubelet"
I1002 08:26:04.699579  566080 config.go:182] Loaded profile config "kindnet-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k87wf" [61742b77-a67a-440c-887a-6278c94137aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k87wf" [61742b77-a67a-440c-887a-6278c94137aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004339432s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-256460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-256460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.305553691s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-256460 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (89.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-256460 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-256460 --alsologtostderr -v=3: (1m29.880267894s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.88s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (69.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.813396274s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.81s)

TestNetworkPlugins/group/calico/Start (92.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:26:35.585343  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/default-k8s-diff-port-048759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.411486245s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.41s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-693693 "pgrep -a kubelet"
I1002 08:26:45.423368  566080 config.go:182] Loaded profile config "enable-default-cni-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nqwfg" [40d101d3-2b49-402f-a8a5-a4019c52d002] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nqwfg" [40d101d3-2b49-402f-a8a5-a4019c52d002] Running
E1002 08:26:52.880012  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/old-k8s-version-208197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.353668044s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.58s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-693693 exec deployment/netcat -- nslookup kubernetes.default
E1002 08:26:56.067487  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/default-k8s-diff-port-048759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (93.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 08:27:33.842244  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/old-k8s-version-208197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m33.104091754s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-256460 -n no-preload-256460
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-256460 -n no-preload-256460: exit status 7 (92.224195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-256460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-s6xvc" [7de596f5-c9d5-403b-af6f-c406dcf7c0cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005856161s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/SecondStart (66.06s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-256460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 08:27:37.029335  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/default-k8s-diff-port-048759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-256460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m5.636373842s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-256460 -n no-preload-256460
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (66.06s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-693693 "pgrep -a kubelet"
I1002 08:27:42.955891  566080 config.go:182] Loaded profile config "flannel-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-693693 replace --force -f testdata/netcat-deployment.yaml: (1.201552935s)
I1002 08:27:44.188902  566080 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1002 08:27:44.207716  566080 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7d5jg" [f268906a-8324-4273-9f02-bc25dc1e7303] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7d5jg" [f268906a-8324-4273-9f02-bc25dc1e7303] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005735846s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gdjx5" [5ae57500-cf86-4e48-a775-de1227ed15f5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gdjx5" [5ae57500-cf86-4e48-a775-de1227ed15f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006501352s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-693693 "pgrep -a kubelet"
I1002 08:28:11.189820  566080 config.go:182] Loaded profile config "calico-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (12.67s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-57jmn" [df432a6a-e0b1-472a-b6bc-84df06a91848] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-57jmn" [df432a6a-e0b1-472a-b6bc-84df06a91848] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006567915s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.67s)

TestNetworkPlugins/group/custom-flannel/Start (70.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-693693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.604259688s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.60s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r6x68" [fcb62e5b-5365-412e-9a64-e6420b64bdcf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r6x68" [fcb62e5b-5365-412e-9a64-e6420b64bdcf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.121089365s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-693693 "pgrep -a kubelet"
I1002 08:28:46.297089  566080 config.go:182] Loaded profile config "bridge-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x8wv9" [4ff33273-4f92-4ab5-ac05-14a2a8988021] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x8wv9" [4ff33273-4f92-4ab5-ac05-14a2a8988021] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005022099s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r6x68" [fcb62e5b-5365-412e-9a64-e6420b64bdcf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003726953s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-256460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-256460 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-256460 --alsologtostderr -v=1
E1002 08:28:55.764604  566080 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/old-k8s-version-208197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-256460 -n no-preload-256460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-256460 -n no-preload-256460: exit status 2 (292.555093ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-256460 -n no-preload-256460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-256460 -n no-preload-256460: exit status 2 (295.372982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-256460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-256460 -n no-preload-256460
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-256460 -n no-preload-256460
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-693693 "pgrep -a kubelet"
I1002 08:29:25.965287  566080 config.go:182] Loaded profile config "custom-flannel-693693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-693693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zjcs6" [652954bc-6ab3-48e7-904c-379efae9a5e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zjcs6" [652954bc-6ab3-48e7-904c-379efae9a5e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004519563s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-693693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-693693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

Test skip (40/330)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.15
270 TestNetworkPlugins/group/kubenet 3.28
280 TestNetworkPlugins/group/cilium 3.65

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-535714 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-650723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-650723
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
TestNetworkPlugins/group/kubenet (3.28s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-693693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-693693

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-693693

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /etc/hosts:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /etc/resolv.conf:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-693693

>>> host: crictl pods:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: crictl containers:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> k8s: describe netcat deployment:
error: context "kubenet-693693" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-693693" does not exist

>>> k8s: netcat logs:
error: context "kubenet-693693" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-693693" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-693693" does not exist

>>> k8s: coredns logs:
error: context "kubenet-693693" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-693693" does not exist

>>> k8s: api server logs:
error: context "kubenet-693693" does not exist

>>> host: /etc/cni:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: ip a s:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: ip r s:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: iptables-save:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: iptables table nat:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-693693" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-693693" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-693693" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: kubelet daemon config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> k8s: kubelet logs:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.235:8443
  name: pause-227088
contexts:
- context:
    cluster: pause-227088
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-227088
  name: pause-227088
current-context: ""
kind: Config
users:
- name: pause-227088
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.crt
    client-key: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-693693

>>> host: docker daemon status:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: docker daemon config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: docker system info:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: cri-docker daemon status:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: cri-docker daemon config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: cri-dockerd version:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: containerd daemon status:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

>>> host: containerd daemon config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-693693"

                                                
                                                
----------------------- debugLogs end: kubenet-693693 [took: 3.115539453s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-693693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-693693
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)
TestNetworkPlugins/group/cilium (3.65s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-693693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-693693

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-693693

>>> host: /etc/nsswitch.conf:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/hosts:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/resolv.conf:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-693693

>>> host: crictl pods:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: crictl containers:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> k8s: describe netcat deployment:
error: context "cilium-693693" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-693693" does not exist

>>> k8s: netcat logs:
error: context "cilium-693693" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-693693" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-693693" does not exist

>>> k8s: coredns logs:
error: context "cilium-693693" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-693693" does not exist

>>> k8s: api server logs:
error: context "cilium-693693" does not exist

>>> host: /etc/cni:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: ip a s:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: ip r s:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: iptables-save:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: iptables table nat:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-693693

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-693693

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-693693" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-693693" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-693693

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-693693

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-693693" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-693693" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-693693" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-693693" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-693693" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: kubelet daemon config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> k8s: kubelet logs:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-562157/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.235:8443
  name: pause-227088
contexts:
- context:
    cluster: pause-227088
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 08:16:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-227088
  name: pause-227088
current-context: ""
kind: Config
users:
- name: pause-227088
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.crt
    client-key: /home/jenkins/minikube-integration/21643-562157/.minikube/profiles/pause-227088/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-693693

>>> host: docker daemon status:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: docker daemon config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: docker system info:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: cri-docker daemon status:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: cri-docker daemon config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: cri-dockerd version:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: containerd daemon status:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: containerd daemon config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: containerd config dump:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: crio daemon status:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: crio daemon config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: /etc/crio:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

>>> host: crio config:
* Profile "cilium-693693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693693"

----------------------- debugLogs end: cilium-693693 [took: 3.495606147s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-693693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-693693
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)